* Fixed broken hooks handling on pytest 8.1 or later (the
TypeError: import_path() missing 1 required keyword-only
argument: 'consider_namespace_packages' issue). Unfortunately
this sets the minimum supported pytest version to 8.1.
* Fixed bad fixture check that broke down when nbmake was
  enabled.
* Dropped support for the now-EOL Python 3.8. Also moved the
  test suite to only test the latest pytest versions (8.3.x).
* Fixed CSV report generation errors for parametrized test
  benchmarks (issue #268). Contributed by Johnny Huang in #269.
* Added the --benchmark-time-unit CLI option for overriding the
  measurement unit used for display. Contributed by Tony Kuo in
  #257.
* Fixed spelling in some help texts. Contributed by Eugeniy in
  #267.
* Added new cprofile options:
  * --benchmark-cprofile-loops=LOOPS - previously profiling
    only ran the function once; this allows customization.
  * --benchmark-cprofile-top=COUNT - allows showing more rows.
  * --benchmark-cprofile-dump=[FILENAME-PREFIX] - allows
    saving to a file (that you can load in snakeviz,
    RunSnakeRun or other tools).
* Removed hidden dependency on py.path (replaced with pathlib).
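The new cProfile options above can be combined in a single run; a hedged sketch (the test path, loop count, row count, and dump prefix are all hypothetical, and --benchmark-cprofile selects the sort column for the profile table):

```shell
# Profile each benchmark over 10 loops instead of once,
# show 25 rows sorted by cumulative time, and dump the raw
# profile data under the "profile" filename prefix for later
# inspection in snakeviz or similar tools.
pytest tests/test_example.py \
    --benchmark-cprofile=cumtime \
    --benchmark-cprofile-loops=10 \
    --benchmark-cprofile-top=25 \
    --benchmark-cprofile-dump=profile
```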
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python:pytest/python-pytest-benchmark?expand=0&rev=32
--- a/tests/test_normal.py
+++ b/tests/test_normal.py
@@ -5,6 +5,7 @@ Just to make sure the plugin doesn't cho
     Yay, doctests!
 
 """
+import platform
 import sys # noqa
 import time
 from functools import partial
@@ -20,7 +21,10 @@ def test_fast(benchmark):
     assert result is None
 
     if not benchmark.disabled:
-        assert benchmark.stats.stats.min >= 0.000001
+        if '32' in platform.architecture()[0]:
+            assert benchmark.stats.stats.min >= 0.0000001
+        else:
+            assert benchmark.stats.stats.min >= 0.000001
 
 
 def test_slow(benchmark):
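The 32-bit check introduced by this patch can be exercised on its own; a minimal sketch (the two bounds mirror the diff, the surrounding code is illustrative only):

```python
import platform

# platform.architecture()[0] reports the interpreter's pointer
# width as a string such as '64bit' or '32bit'.
bits = platform.architecture()[0]

# Mirror the patched assertion: 32-bit builds have a coarser
# timer, so a smaller minimum-time bound is accepted there.
min_bound = 0.0000001 if '32' in bits else 0.000001
print(bits, min_bound)
```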