-------------------------------------------------------------------
Fri Oct 26 19:45:47 UTC 2018 - Todd R <toddrme2178@gmail.com>

- Update to Version 0.40.1
  * PR #3338: Accidentally left Anton off contributor list for 0.40.0
  * PR #3374: Disable OpenMP in wheel building
  * PR #3376: Update 0.40.1 changelog and docs on OpenMP backend
- Update to Version 0.40.0
  + This release adds a number of major features:
    * A new GPU backend: kernels for AMD GPUs can now be compiled using
      the ROCm driver on Linux.
    * The thread pool implementation used by Numba for automatic
      multithreading is configurable to use TBB, OpenMP, or the old
      "workqueue" implementation. (TBB is likely to become the
      preferred default in a future release.)
    * New documentation on thread and fork-safety with Numba, along
      with overall improvements in thread-safety.
    * Experimental support for executing a block of code inside a
      nopython mode function in object mode.
    * Parallel loops now allow arrays as reduction variables.
    * CUDA improvements: FMA, faster float64 atomics on supporting
      hardware, records in const memory, and improved datetime dtype
      support.
    * More NumPy functions: vander, tri, triu, tril, fill_diagonal.
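The threading layer is selected via the documented NUMBA_THREADING_LAYER environment variable. As a rough illustration only (not Numba's actual resolution code, and the priority order shown is an assumption), a stdlib sketch of how such a fallback chain can be resolved:

```python
import os

# Illustrative priority order for the "default" setting: try TBB,
# then OpenMP, then the old workqueue implementation.
PRIORITY = ["tbb", "omp", "workqueue"]

def pick_threading_layer(available, requested=None):
    """Hypothetical resolver mimicking, in spirit, how a request made
    via NUMBA_THREADING_LAYER could be honoured."""
    if requested is None:
        requested = os.environ.get("NUMBA_THREADING_LAYER", "default")
    if requested != "default":
        if requested not in available:
            raise ValueError("requested layer %r is not available" % requested)
        return requested
    for layer in PRIORITY:
        if layer in available:
            return layer
    raise RuntimeError("no threading layer available")
```

For example, with only OpenMP and workqueue built, the "default" setting falls through to OpenMP.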
  + General Enhancements:
    * PR #3017: Add facility to support with-contexts
    * PR #3033: Add support for multidimensional CFFI arrays
    * PR #3122: Add inliner to object mode pipeline
    * PR #3127: Support for reductions on arrays.
    * PR #3145: Support for np.fill_diagonal
    * PR #3151: Keep a queue of references to last N deserialized
      functions. Fixes #3026
    * PR #3154: Support use of list() if typeable.
    * PR #3166: Objmode with-block
    * PR #3179: Updates for llvmlite 0.25
    * PR #3181: Support function extension in alias analysis
    * PR #3189: Support literal constants in typing of object methods
    * PR #3190: Support passing closures as literal values in typing
    * PR #3199: Support inferring stencil index as constant in simple
      unary expressions
    * PR #3202: Threading layer backend refactor/rewrite/reinvention!
    * PR #3209: Support for np.tri, np.tril and np.triu
    * PR #3211: Handle unpacking in building tuple (BUILD_TUPLE_UNPACK
      opcode)
    * PR #3212: Support for np.vander
    * PR #3227: Add NumPy 1.15 support
    * PR #3272: Add MemInfo_data to runtime._nrt_python.c_helpers
    * PR #3273: Refactor. Removing thread-local-storage based context
      nesting.
    * PR #3278: compiler threadsafety lockdown
    * PR #3291: Add CPU count and CFS restrictions info to numba -s.
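As a reminder of what the newly supported np.vander computes, a plain-Python model of its semantics (this is a sketch for reference, not Numba's implementation):

```python
def vander(x, n=None, increasing=False):
    """Pure-Python model of numpy.vander: row i holds powers of x[i].

    By default the powers decrease across the columns, matching NumPy's
    default behaviour.
    """
    if n is None:
        n = len(x)
    powers = range(n) if increasing else range(n - 1, -1, -1)
    return [[xi ** p for p in powers] for xi in x]
```

So vander([1, 2, 3], 3) yields rows of squares, the values themselves, and ones.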
  + CUDA Enhancements:
    * PR #3152: Use cuda driver api to get best blocksize for best
      occupancy
    * PR #3165: Add FMA intrinsic support
    * PR #3172: Use float64 add Atomics, Where Available
    * PR #3186: Support Records in CUDA Const Memory
    * PR #3191: CUDA: fix log size
    * PR #3198: Fix GPU datetime timedelta types usage
    * PR #3221: Support datetime/timedelta scalar argument to a CUDA
      kernel.
    * PR #3259: Add DeviceNDArray.view method to reinterpret data as a
      different type.
    * PR #3310: Fix IPC handling of sliced cuda array.
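What "view as a different type" (PR #3259) means is a reinterpretation of the raw bytes, not a value conversion. A CPU-side analogy using only the stdlib (a hypothetical helper, not related to the actual DeviceNDArray code):

```python
import struct

def view_float64_as_uint64(values):
    """Reinterpret the IEEE-754 bytes of float64 values as uint64,
    without converting the numbers -- the same idea as a dtype view."""
    packed = struct.pack("<%dd" % len(values), *values)
    return list(struct.unpack("<%dQ" % len(values), packed))
```

For instance, the float 1.0 viewed as an unsigned integer exposes its bit pattern 0x3FF0000000000000.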
  + ROCm Enhancements:
    * PR #3023: Support for AMDGCN/ROCm.
    * PR #3108: Add ROC info to `numba -s` output.
    * PR #3176: Move ROC vectorize init to npyufunc
    * PR #3177: Add auto_synchronize support to ROC stream
    * PR #3178: Update ROC target documentation.
    * PR #3294: Add compiler lock to ROC compilation path.
    * PR #3280: Add wavebits property to the HSA Agent.
    * PR #3281: Fix ds_permute types and add tests
  + Continuous Integration / Testing:
    * PR #3091: Remove old recipes, switch to test config based on env
      var.
    * PR #3094: Add higher ULP tolerance for products in complex space.
    * PR #3096: Set exit on error in incremental scripts
    * PR #3109: Add skip to test needing jinja2 if no jinja2.
    * PR #3125: Skip cudasim only tests
    * PR #3126: add slack, drop flowdock
    * PR #3147: Improve error message for arg type unsupported during
      typing.
    * PR #3128: Fix recipe/build for jetson tx2/ARM
    * PR #3167: In build script activate env before installing.
    * PR #3180: Add skip to broken test.
    * PR #3216: Fix libcuda.so loading in some container setup
    * PR #3224: Switch to new Gitter notification webhook URL and
      encrypt it
    * PR #3235: Add 32bit Travis CI jobs
    * PR #3257: This adds scipy/ipython back into windows conda test
      phase.
  + Fixes:
    * PR #3038: Fix random integer generation to match results from
      NumPy.
    * PR #3045: Fix #3027 - Numba reassigns sys.stdout
    * PR #3059: Handler for known LoweringErrors.
    * PR #3060: Adjust attribute error for NumPy functions.
    * PR #3067: Abort simulator threads on exception in thread block.
    * PR #3079: Implement +/-(types.boolean) Fix #2624
    * PR #3080: Compute np.var and np.std correctly for complex types.
    * PR #3088: Fix #3066 (array.dtype.type in prange)
    * PR #3089: Fix invalid ParallelAccelerator hoisting issue.
    * PR #3136: Fix #3135 (lowering error)
    * PR #3137: Fix for issue #3103 (race condition detection)
    * PR #3142: Fix issue #3139 (parfors reuse of reduction variable
      across prange blocks)
    * PR #3148: Remove dead array equal @infer code
    * PR #3153: Fix canonicalize_array_math typing for calls with kw
      args
    * PR #3156: Fixes issue with missing pygments in testing and adds
      guards.
    * PR #3168: Py37 bytes output fix.
    * PR #3171: Fix #3146. Fix CFUNCTYPE void* return-type handling
    * PR #3193: Fix setitem/getitem resolvers
    * PR #3222: Fix #3214. Mishandling of POP_BLOCK in while True loop.
    * PR #3230: Fixes liveness analysis issue in looplifting
    * PR #3233: Fix return type difference for 32bit ctypes.c_void_p
    * PR #3234: Fix types and layout for `np.where`.
    * PR #3237: Fix DeprecationWarning about imp module
    * PR #3241: Fix #3225. Normalize 0d array to scalar in typing of
      indexing code.
    * PR #3256: Fix #3251: Move imports of ABCs to collections.abc for
      Python >= 3.3
    * PR #3292: Fix issue #3279.
    * PR #3302: Fix error due to mismatching dtype
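The complex variance fix (PR #3080) comes down to using the squared magnitude of the deviations, which keeps the variance real. A plain-Python sketch of the correct semantics (for reference, not the Numba code):

```python
import math

def complex_var(xs):
    """Variance of complex data: the mean of |x - mean|**2, which is a
    real number -- the behaviour NumPy defines for complex input."""
    m = sum(xs) / len(xs)
    return sum(abs(x - m) ** 2 for x in xs) / len(xs)

def complex_std(xs):
    """Standard deviation: the (real) square root of the variance."""
    return math.sqrt(complex_var(xs))
```

For [1+1j, 1-1j] the mean is 1+0j and each deviation has magnitude 1, so the variance is exactly 1.0.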
  + Documentation Updates:
    * PR #3104: Workaround for #3098 (test_optional_unpack Heisenbug)
    * PR #3132: Adds an ~5 minute guide to Numba.
    * PR #3194: Fix docs RE: np.random generator fork/thread safety
    * PR #3242: Page with Numba talks and tutorial links
    * PR #3258: Allow users to choose the type of issue they are
      reporting.
    * PR #3260: Fixed broken link
    * PR #3266: Fix cuda pointer ownership problem with user/externally
      allocated pointer
    * PR #3269: Tweak typography with CSS
    * PR #3270: Update FAQ for functions passed as arguments
    * PR #3274: Update installation instructions
    * PR #3275: Note pyobject and voidptr are types in docs
    * PR #3288: Do not need to call parallel optimizations
      "experimental" anymore
    * PR #3318: Tweak spacing to avoid search box wrapping onto second
      line
- Remove upstream-included numba-0.39.0-fix-3135.patch
-------------------------------------------------------------------
Fri Jul 20 13:09:58 UTC 2018 - mcepl@suse.com

- Add patch numba-0.39.0-fix-3135.patch so that datashader tests do
  not fail. (https://github.com/bokeh/datashader/issues/620)
-------------------------------------------------------------------
Fri Jul 13 09:20:32 UTC 2018 - tchvatal@suse.com

- Fix version requirement to ask for new llvmlite
-------------------------------------------------------------------
Thu Jul 12 03:31:08 UTC 2018 - arun@gmx.de

- update to version 0.39.0:
  * Here are the highlights for the Numba 0.39.0 release.
    + This is the first version that supports Python 3.7.
    + With help from Intel, we have fixed the issues with SVML support
      (related issues #2938, #2998, #3006).
    + List has gained support for containing reference-counted types
      like NumPy arrays and `list`. Note, list still cannot hold
      heterogeneous types.
    + We have made a significant change to the internal
      calling-convention, which should be transparent to most users,
      to allow for a future feature that will permit jumping back into
      python-mode from a nopython-mode function. This also fixes a
      limitation to `print` that disabled its use from nopython
      functions that were deep in the call-stack.
    + For CUDA GPU support, we added a `__cuda_array_interface__`
      following the NumPy array interface specification to allow Numba
      to consume externally defined device arrays. We have opened a
      corresponding pull request to CuPy to test out the concept and
      be able to use a CuPy GPU array.
    + The Numba dispatcher `inspect_types()` method now supports the
      kwarg `pretty`, which if set to `True` will produce ANSI/HTML
      output showing the annotated types when invoked from
      ipython/jupyter-notebook respectively.
    + The NumPy functions `ndarray.dot`, `np.percentile` and
      `np.nanpercentile`, and `np.unique` are now supported.
    + Numba now supports the use of a per-project configuration file
      to permanently set behaviours typically set via `NUMBA_*` family
      environment variables.
    + Support for the `ppc64le` architecture has been added.
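The `__cuda_array_interface__` protocol is just an attribute holding a dict describing the device buffer. A host-only illustration of the dict's shape (the object below is entirely hypothetical, its "device pointer" is really host memory, and the exact `version` value is an assumption about the revision current at the time):

```python
import ctypes

class FakeDeviceArray:
    """Illustration only: an object exposing __cuda_array_interface__
    in the shape a consumer such as Numba or CuPy would inspect."""

    def __init__(self, values):
        # Host buffer standing in for device memory in this sketch.
        self._buf = (ctypes.c_double * len(values))(*values)
        self.__cuda_array_interface__ = {
            "shape": (len(values),),
            "typestr": "<f8",  # little-endian float64
            # (pointer as int, read-only flag)
            "data": (ctypes.addressof(self._buf), False),
            "version": 0,  # assumed: the initial interface revision
        }

arr = FakeDeviceArray([1.0, 2.0, 3.0])
iface = arr.__cuda_array_interface__
```

A real device array would supply an actual device pointer; the consumer wraps it without copying.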
  * Enhancements:
    + PR #2793: Simplify and remove javascript from html_annotate
      templates.
    + PR #2840: Support list of refcounted types
    + PR #2902: Support for np.unique
    + PR #2926: Enable fence for all architecture and add developer
      notes
    + PR #2928: Making error about untyped list more informative.
    + PR #2930: Add configuration file and color schemes.
    + PR #2932: Fix encoding to 'UTF-8' in `check_output` decode.
    + PR #2938: Python 3.7 compat: _Py_Finalizing becomes
      _Py_IsFinalizing()
    + PR #2939: Comprehensive SVML unit test
    + PR #2946: Add support for `ndarray.dot` method and tests.
    + PR #2953: percentile and nanpercentile
    + PR #2957: Add new 3.7 opcode support.
    + PR #2963: Improve alias analysis to be more comprehensive
    + PR #2984: Support for namedtuples in array analysis
    + PR #2986: Fix environment propagation
    + PR #2990: Improve function call matching for intrinsics
    + PR #3002: Second pass at error rewrites (interpreter errors).
    + PR #3004: Add numpy.empty to the list of pure functions.
    + PR #3008: Augment SVML detection with llvmlite SVML patch
      detection.
    + PR #3012: Make use of the common spelling of
      heterogeneous/homogeneous.
    + PR #3032: Fix pycc ctypes test due to mismatch in
      calling-convention
    + PR #3039: Add SVML detection to Numba environment diagnostic
      tool.
    + PR #3041: This adds @needs_blas to tests that use BLAS
    + PR #3056: Require llvmlite>=0.24.0
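For reference, np.percentile's default behaviour (PR #2953) interpolates linearly between the two nearest order statistics. A plain-Python model of those semantics (a sketch, not the Numba implementation):

```python
def percentile(data, q):
    """Model of np.percentile with the default linear interpolation:
    the fractional rank q/100 * (n - 1) is split into an integer part
    (the lower neighbour) and a fraction used to interpolate."""
    xs = sorted(data)
    rank = q / 100.0 * (len(xs) - 1)
    lo = int(rank)
    frac = rank - lo
    if lo + 1 < len(xs):
        return xs[lo] + (xs[lo + 1] - xs[lo]) * frac
    return xs[lo]
```

So the 50th percentile of [1, 2, 3, 4] interpolates halfway between 2 and 3, giving 2.5.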
  * CUDA Enhancements:
    + PR #2860: __cuda_array_interface__
    + PR #2910: More CUDA intrinsics
    + PR #2929: Add Flag To Prevent Unnecessary D->H Copies
    + PR #3037: Add CUDA IPC support on non-peer-accessible devices
  * CI Enhancements:
    + PR #3021: Update appveyor config.
    + PR #3040: Add fault handler to all builds
    + PR #3042: Add catchsegv
    + PR #3077: Adds optional number of processes for `-m` in testing
  * Fixes:
    + PR #2897: Fix line position of delete statement in numba ir
    + PR #2905: Fix for #2862
    + PR #3009: Fix optional type returning in recursive call
    + PR #3019: workaround and unittest for issue #3016
    + PR #3035: [TESTING] Attempt delayed removal of Env
    + PR #3048: [WIP] Fix cuda tests failure on buildfarm
    + PR #3054: Make test work on 32-bit
    + PR #3062: Fix cuda.In freeing devary before the kernel launch
    + PR #3073: Workaround #3072
    + PR #3076: Avoid ignored exception due to missing globals at
      interpreter teardown
  * Documentation Updates:
    + PR #2966: Fix syntax in env var docs.
    + PR #2967: Fix typo in CUDA kernel layout example.
    + PR #2970: Fix docstring copy paste error.
-------------------------------------------------------------------
Sun Jun 24 01:05:37 UTC 2018 - arun@gmx.de

- update to version 0.38.1:
  This is a critical bug fix release addressing:
  https://github.com/numba/numba/issues/3006
  The bug does not impact users using conda packages from Anaconda or
  Intel Python Distribution (but it does impact conda-forge). It does
  not impact users of pip using wheels from PyPI.
  This only impacts a small number of users where:
  * The ICC runtime (specifically libsvml) is present in the user's
    environment.
  * The user is using an llvmlite statically linked against a version
    of LLVM that has not been patched with SVML support.
  * The platform is 64-bit.
  The release fixes a code generation path that could lead to the
  production of incorrect results under the above situation.
  Fixes:
  * PR #3007: Augment SVML detection with llvmlite SVML patch
    detection.
-------------------------------------------------------------------
Fri May 18 08:06:59 UTC 2018 - tchvatal@suse.com

- Fix dependencies to match reality
- Add more items to make the python2 package build
-------------------------------------------------------------------
Sat May 12 16:21:24 UTC 2018 - arun@gmx.de

- update to version 0.38.0:
  * highlights:
    + Numba (via llvmlite) is now backed by LLVM 6.0; general
      vectorization is improved as a result. A significant
      long-standing LLVM bug that was causing corruption was also
      found and fixed.
    + Further considerable improvements in vectorization are made
      available as Numba now supports Intel's short vector math
      library (SVML). Try it out with `conda install -c numba icc_rt`.
    + CUDA 8.0 is now the minimum supported CUDA version.
  * Other highlights include:
    + Bug fixes to `parallel=True` have enabled more vectorization
      opportunities when using the ParallelAccelerator technology.
    + Much effort has gone into improving error reporting and the
      general usability of Numba. This includes highlighted error
      messages and performance tips documentation. Try it out with
      `conda install colorama`.
    + A number of new NumPy functions are supported: `np.convolve`,
      `np.correlate`, `np.reshape`, `np.transpose`, `np.permutation`,
      `np.real`, and `np.imag`; `np.searchsorted` now supports the
      `side` kwarg. Further, `np.argsort` now supports the `kind`
      kwarg with `quicksort` and `mergesort` available.
    + The Numba extension API has gained the ability to operate more
      easily with functions from Cython modules through the use of
      `numba.extending.get_cython_function_address` to obtain function
      addresses for direct use in `ctypes.CFUNCTYPE`.
    + Numba now allows the passing of jitted functions (and containers
      of jitted functions) as arguments to other jitted functions.
    + The CUDA functionality has gained support for a larger selection
      of bit manipulation intrinsics, also SELP, and has had a number
      of bugs fixed.
    + Initial work to support the PPC64LE platform has been added;
      full support is however waiting on the LLVM 6.0.1 release, as it
      contains critical patches not present in 6.0.0. It is hoped
      that any remaining issues will be fixed in the next release.
    + The capacity for advanced users/compiler engineers to define
      their own compilation pipelines.
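The `side` kwarg on np.searchsorted mirrors the stdlib bisect module, which makes its semantics easy to check without NumPy:

```python
import bisect

a = [1, 2, 2, 3]

# np.searchsorted(a, 2, side='left') -> leftmost insertion point that
# keeps the array sorted, i.e. before the run of equal elements.
left = bisect.bisect_left(a, 2)

# np.searchsorted(a, 2, side='right') -> insertion point past the run
# of equal elements.
right = bisect.bisect_right(a, 2)
```

Here `left` is 1 and `right` is 3, bracketing the run of 2s.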
-------------------------------------------------------------------
Mon Apr 23 14:55:41 UTC 2018 - toddrme2178@gmail.com

- Fix dependency versions
-------------------------------------------------------------------
Fri Mar 2 23:16:36 UTC 2018 - arun@gmx.de

- specfile:
  * update required llvmlite version
- update to version 0.37.0:
  * Misc enhancements:
    + PR #2627: Remove hacks to make llvmlite threadsafe
    + PR #2672: Add ascontiguousarray
    + PR #2678: Add Gitter badge
    + PR #2691: Fix #2690: add intrinsic to convert array to tuple
    + PR #2703: Test runner feature: failed-first and last-failed
    + PR #2708: Patch for issue #1907
    + PR #2732: Add support for array.fill
  * Misc Fixes:
    + PR #2610: Fix #2606 lowering of optional.setattr
    + PR #2650: Remove skip for win32 cosine test
    + PR #2668: Fix empty_like from readonly arrays.
    + PR #2682: Fixes #2210, remove _DisableJitWrapper
    + PR #2684: Fix #2340, generator error yielding bool
    + PR #2693: Add travis-ci testing of NumPy 1.14, and also check on
      Python 2.7
    + PR #2694: Avoid type inference failure due to a typing template
      rejection
    + PR #2695: Update llvmlite version dependency.
    + PR #2696: Fix tuple indexing codegeneration for empty tuple
    + PR #2698: Fix #2697 by deferring deletion in the simplify_CFG
      loop.
    + PR #2701: Small fix to avoid tempfiles being created in the
      current directory
    + PR #2725: Fix #2481, LLVM IR parsing error due to mutated IR
    + PR #2726: Fix #2673: incorrect fork error msg.
    + PR #2728: Alternative to #2620. Remove dead code
      ByteCodeInst.get.
    + PR #2730: Add guard for test needing SciPy/BLAS
  * Documentation updates:
    + PR #2670: Update communication channels
    + PR #2671: Add docs about diagnosing loop vectorizer
    + PR #2683: Add docs on const arg requirements and on const mem
      alloc
    + PR #2722: Add docs on numpy support in cuda
    + PR #2724: Update doc: warning about unsupported arguments
  * ParallelAccelerator enhancements/fixes:
    + Parallel support for `np.arange` and `np.linspace`, also
      `np.mean`, `np.std` and `np.var` are added. This was performed
      as part of a general refactor and cleanup of the core
      ParallelAccelerator code.
    + PR #2674: Core pa
    + PR #2704: Generate Dels after parfor sequential lowering
    + PR #2716: Handle matching directly supported functions
  * CUDA enhancements:
    + PR #2665: CUDA DeviceNDArray: Support numpy transpose API
    + PR #2681: Allow Assigning to DeviceNDArrays
    + PR #2702: Make DummyArray do High Dimensional Reshapes
    + PR #2714: Use CFFI to Reuse Code
  * CUDA fixes:
    + PR #2667: Fix CUDA DeviceNDArray slicing
    + PR #2686: Fix #2663: incorrect offset when indexing cuda array.
    + PR #2687: Ensure Constructed Stream Bound
    + PR #2706: Workaround for unexpected warp divergence due to
      exception raising code
    + PR #2707: Fix regression: cuda test submodules not loading
      properly in runtests
    + PR #2731: Use more challenging values in slice tests.
    + PR #2720: A quick testsuite fix to not run the new cuda testcase
      in the multiprocess pool
-------------------------------------------------------------------
Thu Jan 11 19:25:55 UTC 2018 - toddrme2178@gmail.com

- Bump minimum llvmlite version.
-------------------------------------------------------------------
Thu Dec 21 18:33:16 UTC 2017 - arun@gmx.de

- update to version 0.36.2:
  * PR #2645: Avoid CPython bug with "exec" in older 2.7.x.
  * PR #2652: Add support for CUDA 9.
-------------------------------------------------------------------
Fri Dec 8 17:59:51 UTC 2017 - arun@gmx.de

- update to version 0.36.1:
  * ParallelAccelerator features:
    + PR #2457: Stencil Computations in ParallelAccelerator
    + PR #2548: Slice and range fusion, parallelizing bitarray and
      slice assignment
    + PR #2516: Support general reductions in ParallelAccelerator
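A stencil computation (PR #2457) applies a fixed neighbourhood kernel at every point of an array. A minimal 1-D example in plain Python of what such a kernel computes (the real feature is expressed with Numba's @stencil decorator; the zero-border behaviour here is an assumption mirroring its default):

```python
def smooth3(a):
    """Three-point moving average: out[i] = (a[i-1] + a[i] + a[i+1]) / 3.
    Border points, which lack a full neighbourhood, are left at 0."""
    out = [0.0] * len(a)
    for i in range(1, len(a) - 1):
        out[i] = (a[i - 1] + a[i] + a[i + 1]) / 3.0
    return out
```

The interior points are averaged with their neighbours while the two border points stay at the fill value.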
  * ParallelAccelerator fixes:
    + PR #2540: Fix bug #2537
    + PR #2566: Fix issue #2564.
    + PR #2599: Fix nested multi-dimensional parfor type inference
      issue
    + PR #2604: Fixes for stencil tests and cmath sin().
    + PR #2605: Fixes issue #2603.
  * PR #2568: Update for LLVM 5
  * PR #2607: Fixes abort when getting address to
    "nrt_unresolved_abort"
  * PR #2615: Working towards conda build 3
  * Misc fixes/enhancements:
    + PR #2534: Add tuple support to np.take.
    + PR #2551: Rebranding fix
    + PR #2552: relative doc links
    + PR #2570: Fix issue #2561, handle missing successor on loop exit
    + PR #2588: Fix #2555. Disable libpython.so linking on linux
    + PR #2601: Update llvmlite version dependency.
    + PR #2608: Fix potential cache file collision
    + PR #2612: Fix NRT test failure due to increased overhead when
      running in coverage
    + PR #2619: Fix dubious pthread_cond_signal not in lock
    + PR #2622: Fix `np.nanmedian` for all NaN case.
    + PR #2633: Fix markdown in CONTRIBUTING.md
    + PR #2635: Make the dependency on compilers for AOT optional.
  * CUDA support fixes:
    + PR #2523: Fix invalid cuda context in memory transfer calls in
      another thread
    + PR #2575: Use CPU to initialize xoroshiro states for GPU RNG.
      Fixes #2573
    + PR #2581: Fix cuda gufunc mishandling of scalar arg as array and
      out argument
-------------------------------------------------------------------
Tue Oct 3 06:05:20 UTC 2017 - arun@gmx.de

- update to version 0.35.0:
  * ParallelAccelerator:
    + PR #2400: Array comprehension
    + PR #2405: Support printing Numpy arrays
    + PR #2438: Support more np.random functions in
      ParallelAccelerator
    + PR #2482: Support for sum with axis in nopython mode.
    + PR #2487: Adding developer documentation for ParallelAccelerator
      technology.
    + PR #2492: Core PA refactor adds assertions for broadcast
      semantics
  * ParallelAccelerator fixes:
    + PR #2478: Rename cfg before parfor translation (#2477)
    + PR #2479: Fix broken array comprehension tests on unsupported
      platforms
    + PR #2484: Fix array comprehension test on win64
    + PR #2506: Fix for 32-bit machines.
  * Additional features of note:
    + PR #2490: Implement np.take and ndarray.take
    + PR #2493: Display a warning if parallel=True is set but not
      possible.
    + PR #2513: Add np.MachAr, np.finfo, np.iinfo
    + PR #2515: Allow environ overriding of cpu target and cpu
      features.
  * Misc fixes/enhancements:
    + PR #2455: add contextual information to runtime errors
    + PR #2470: Fixes #2458, poor performance in np.median
    + PR #2471: Ensure LLVM threadsafety in {g,}ufunc building.
    + PR #2494: Update doc theme
    + PR #2503: Remove hacky code added in 2482 and feature
      enhancement
    + PR #2505: Serialise env mutation tests during multithreaded
      testing.
    + PR #2520: Fix failing cpu-target override tests
  * CUDA support fixes:
    + PR #2504: Enable CUDA toolkit version testing
    + PR #2509: Disable tests generating code unavailable in lower CC
      versions.
    + PR #2511: Fix Windows 64 bit CUDA tests.
- changes from version 0.34.0:
  * ParallelAccelerator features:
    + PR #2318: Transfer ParallelAccelerator technology to Numba
    + PR #2379: ParallelAccelerator Core Improvements
    + PR #2367: Add support for len(range(...))
    + PR #2369: List comprehension
    + PR #2391: Explicit Parallel Loop Support (prange)
  * CUDA support enhancements:
    + PR #2377: New GPU reduction algorithm
  * CUDA support fixes:
    + PR #2397: Fix #2393, always set alignment of cuda static memory
      regions
  * Misc Fixes:
    + PR #2373, Issue #2372: 32-bit compatibility fix for parfor
      related code
    + PR #2376: Fix #2375 missing stdint.h for py2.7 vc9
    + PR #2378: Fix deadlock in parallel gufunc when kernel acquires
      the GIL.
    + PR #2382: Forbid unsafe casting in bitwise operation
    + PR #2385: docs: fix Sphinx errors
    + PR #2396: Use 64-bit RHS operand for shift
    + PR #2404: Fix threadsafety logic issue in ufunc compilation
      cache.
    + PR #2424: Ensure consistent iteration order of blocks for type
      inference.
    + PR #2425: Guard code to prevent the use of parallel on win32 +
      py27
    + PR #2426: Basic test for Enum member type recovery.
    + PR #2433: Fix up the parfors tests with respect to windows py2.7
    + PR #2442: Skip tests that need BLAS/LAPACK if scipy is not
      available.
    + PR #2444: Add test for invalid array setitem
    + PR #2449: Make the runtime initialiser threadsafe
    + PR #2452: Skip CFG test on 64bit windows
  * Misc Enhancements:
    + PR #2366: Improvements to IR utils
    + PR #2388: Update README.rst to indicate the proper version of
      LLVM
    + PR #2394: Upgrade to llvmlite 0.19.*
    + PR #2395: Update llvmlite version to 0.19
    + PR #2406: Expose environment object to ufuncs
    + PR #2407: Expose environment object to target-context inside
      lowerer
    + PR #2413: Add flags to pass through to conda build for buildbot
    + PR #2414: Add cross compile flags to local recipe
    + PR #2415: A few cleanups for rewrites
    + PR #2418: Add getitem support for Enum classes
    + PR #2419: Add support for returning enums in vectorize
    + PR #2421: Add copyright notice for Intel contributed files.
    + PR #2422: Patch code base to work with np 1.13 release
    + PR #2448: Adds in warning message when using parallel if
      cache=True
    + PR #2450: Add test for keyword arg on .sum-like and .cumsum-like
      array methods
- changes from version 0.33.0:
  * There are also several enhancements to the CUDA GPU support:
    + A GPU random number generator based on the xoroshiro128+
      algorithm is added. See details and examples in documentation.
    + @cuda.jit CUDA kernels can now call @jit and @njit CPU functions
      and they will automatically be compiled as CUDA device
      functions.
    + CUDA IPC memory API is exposed for sharing memory between
      processes. See usage details in documentation.
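The xoroshiro128+ generator the GPU RNG is based on has a compact core: 128 bits of state, a rotate-xor-shift update, and the sum of the two state words as output. A pure-Python sketch using the original (55, 14, 36) parameter set from Blackman and Vigna's paper (shown for reference; the parameters Numba's CUDA implementation uses are an assumption here):

```python
MASK = (1 << 64) - 1  # keep arithmetic in 64 bits

def _rotl(x, k):
    """Rotate a 64-bit value left by k bits."""
    return ((x << k) | (x >> (64 - k))) & MASK

def xoroshiro128plus(s0, s1):
    """One step of xoroshiro128+: returns (output, new_s0, new_s1)."""
    result = (s0 + s1) & MASK          # the "+" in the name
    s1 ^= s0
    new_s0 = _rotl(s0, 55) ^ s1 ^ ((s1 << 14) & MASK)
    new_s1 = _rotl(s1, 36)
    return result, new_s0, new_s1
```

Each GPU thread carries its own (s0, s1) pair, which is why the states are initialized CPU-side (see PR #2575 in the 0.36.1 entry).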
* Reference counting enhancements:
+ PR #2346, Issue #2345, #2248: Add extra refcount pruning after
inlining
+ PR #2349: Fix refct pruning not removing refct op with tail
call.
+ PR #2352, Issue #2350: Add refcount pruning pass for function
that does not need refcount
* CUDA support enhancements:
+ PR #2023: Supports CUDA IPC for device array
+ PR #2343, Issue #2335: Allow CPU jit decorated function to be
used as cuda device function
+ PR #2347: Add random number generator support for CUDA device
code
+ PR #2361: Update autotune table for CC: 5.3, 6.0, 6.1, 6.2
* Misc fixes:
+ PR #2362: Avoid test failure due to typing to int32 on 32-bit
platforms
+ PR #2359: Fixed nogil example that threw a TypeError when
executed.
+ PR #2357, Issue #2356: Fix fragile test that depends on how the
script is executed.
+ PR #2355: Fix cpu dispatcher referenced as attribute of another
module
+ PR #2354: Fixes an issue with caching when function needs NRT
and refcount pruning
+ PR #2342, Issue #2339: Add warnings to inspection when it is
used on unserialized cached code
+ PR #2329, Issue #2250: Better handling of missing op codes
* Misc enhancements:
+ PR #2360: Adds missing values in error message interp.
+ PR #2353: Handle when get_host_cpu_features() raises
RuntimeError
+ PR #2351: Enable SVML for erf/erfc/gamma/lgamma/log2
+ PR #2344: Expose error_model setting in jit decorator
+ PR #2337: Align blocking terminate support for fork() with new
TBB version
+ PR #2336: Bump llvmlite version to 0.18
+ PR #2330: Core changes in PR #2318
-------------------------------------------------------------------
Wed May 3 18:23:09 UTC 2017 - toddrme2178@gmail.com
- update to version 0.32.0:
+ Improvements:
* PR #2322: Suppress test error due to unknown but consistent error with tgamma
* PR #2320: Update llvmlite dependency to 0.17
* PR #2308: Add details to error message on why cuda support is disabled.
* PR #2302: Add os x to travis
* PR #2294: Disable remove_module on MCJIT due to memory leak inside LLVM
* PR #2291: Split parallel tests and recycle workers to tame memory usage
* PR #2253: Remove the pointer-stuffing hack for storing meminfos in lists
+ Fixes:
* PR #2331: Fix a bug in the GPU array indexing
* PR #2326: Fix #2321 docs referring to non-existing function.
* PR #2316: Fixing more race-condition problems
* PR #2315: Fix #2314. Relax strict type check to allow optional type.
* PR #2310: Fix race condition due to concurrent compilation and cache loading
* PR #2304: Fix intrinsic 1st arg not a typing.Context as stated by the docs.
* PR #2287: Fix int64 atomic min-max
* PR #2286: Fix #2285 `@overload_method` not linking dependent libs
* PR #2303: Add missing import statements to interval-example.rst
- Implement single-spec version
-------------------------------------------------------------------
Wed Feb 22 22:15:53 UTC 2017 - arun@gmx.de
- update to version 0.31.0:
* Improvements:
+ PR #2281: Update for numpy1.12
+ PR #2278: Add CUDA atomic.{max, min, compare_and_swap}
+ PR #2277: Add about section to conda recipes to identify
license and other metadata in Anaconda Cloud
+ PR #2271: Adopt itanium C++-style mangling for CPU and CUDA
targets
+ PR #2267: Add fastmath flags
+ PR #2261: Support dtype.type
+ PR #2249: Changes for llvm3.9
+ PR #2234: Bump llvmlite requirement to 0.16 and add
install_name_tool_fixer to mviewbuf for OS X
+ PR #2230: Add python3.6 to TravisCi
+ PR #2227: Enable caching for gufunc wrapper
+ PR #2170: Add debugging support
+ PR #2037: inspect_cfg() for easier visualization of the function
operation
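The atomic compare_and_swap added by PR #2278 follows the usual CAS contract: write a new value only if the current value equals the expected one, and always return the value observed before the attempt. A stdlib-only sketch of that contract (an illustration of the semantics, not Numba's CUDA intrinsic; the AtomicCell class is hypothetical):

```python
# Stdlib-only illustration of compare-and-swap semantics: store `new`
# only when the current value equals `expected`, and return the value
# seen before the attempt. A lock stands in for hardware atomicity.
import threading

class AtomicCell:
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        with self._lock:
            old = self._value
            if old == expected:
                self._value = new
            return old
```

The returned old value lets callers detect whether the swap succeeded (it did exactly when the return value equals expected), which is how CAS loops are typically written.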
* Fixes:
+ PR #2274: Fix nvvm ir patch in mishandling "load"
+ PR #2272: Fix breakage to cuda7.5
+ PR #2269: Fix caching of copy_strides kernel in cuda.reduce
+ PR #2265: Fix #2263: error when linking two modules with dynamic
globals
+ PR #2252: Fix path separator in test
+ PR #2246: Fix overuse of memory in some systems with fork
+ PR #2241: Fix #2240: __module__ in dynamically created function
not a str
+ PR #2239: Fix fingerprint computation failure preventing
fallback
-------------------------------------------------------------------
Sun Jan 15 00:33:08 UTC 2017 - arun@gmx.de
- update to version 0.30.1:
* Fixes:
+ PR #2232: Fix name clashes with _Py_hashtable_xxx in Python 3.6.
* Improvements:
+ PR #2217: Add Intel TBB threadpool implementation for parallel
ufunc.
-------------------------------------------------------------------
Tue Jan 10 17:17:33 UTC 2017 - arun@gmx.de
- specfile:
* update copyright year
- update to version 0.30.0:
* Improvements:
+ PR #2209: Support Python 3.6.
+ PR #2175: Support np.trace(), np.outer() and np.kron().
+ PR #2197: Support np.nanprod().
+ PR #2190: Support caching for ufunc.
+ PR #2186: Add system reporting tool.
* Fixes:
+ PR #2214, Issue #2212: Fix memory error with ndenumerate and
flat iterators.
+ PR #2206, Issue #2163: Fix zip() consuming extra elements in
early exhaustion.
+ PR #2185, Issue #2159, #2169: Fix rewrite pass affecting objmode
fallback.
+ PR #2204, Issue #2178: Fix annotation for liftedloop.
+ PR #2203: Fix Appveyor segfault with Python 3.5.
+ PR #2202, Issue #2198: Fix target context not initialized when
loading from ufunc cache.
+ PR #2172, Issue #2171: Fix optional type unpacking.
+ PR #2189, Issue #2188: Disable freezing of big (>1MB) global
arrays.
+ PR #2180, Issue #2179: Fix invalid variable version in
looplifting.
+ PR #2156, Issue #2155: Fix divmod, floordiv segfault on CUDA.
-------------------------------------------------------------------
Fri Dec 2 21:07:51 UTC 2016 - jengelh@inai.de
- remove subjective words from description
-------------------------------------------------------------------
Sat Nov 5 17:53:40 UTC 2016 - arun@gmx.de
- update to version 0.29.0:
* Improvements:
+ PR #2130, #2137: Add type-inferred recursion with docs and
examples.
+ PR #2134: Add np.linalg.matrix_power.
+ PR #2125: Add np.roots.
+ PR #2129: Add np.linalg.{eigvals,eigh,eigvalsh}.
+ PR #2126: Add array-to-array broadcasting.
+ PR #2069: Add hstack and related functions.
+ PR #2128: Allow for vectorizing a jitted function. (thanks to
@dhirschfeld)
+ PR #2117: Update examples and make them test-able.
+ PR #2127: Refactor interpreter class and its results.
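np.linalg.matrix_power, added by PR #2134, computes integer powers of a square matrix. For non-negative exponents its semantics can be sketched in stdlib-only Python via binary (repeated-squaring) exponentiation; this is an illustration of the semantics, not Numba's or NumPy's implementation:

```python
# Stdlib-only sketch of np.linalg.matrix_power semantics for a
# non-negative integer exponent, via repeated squaring.
# Matrices are lists of lists of numbers.
def mat_mul(a, b):
    """Multiply two n x n matrices."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matrix_power(m, p):
    """Return m raised to the non-negative integer power p."""
    n = len(m)
    # Start from the identity matrix (m ** 0).
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    base = [row[:] for row in m]
    while p > 0:
        if p & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        p >>= 1
    return result
```

Repeated squaring needs only O(log p) matrix multiplications, e.g. powers of the 2x2 Fibonacci matrix [[1, 1], [1, 0]] yield consecutive Fibonacci numbers.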
* Fixes:
+ PR #2149: Workaround MSVC9.0 SP1 fmod bug kb982107.
+ PR #2145, Issue #2009: Fixes kwargs for jitclass __init__
method.
+ PR #2150: Fix slowdown in objmode fallback.
+ PR #2050, Issue #1258: Fix liveness problem with some generator
loops.
+ PR #2072, Issue #1995: Right shift of unsigned LHS should be
logical.
+ PR #2115, Issue #1466: Fix inspect_types() error due to mangled
variable name.
+ PR #2119, Issue #2118: Fix array type created from record-dtype.
+ PR #2122, Issue #1808: Fix returning a generator due to
datamodel error.
-------------------------------------------------------------------
Fri Sep 23 23:38:02 UTC 2016 - toddrme2178@gmail.com
- Initial version