forked from pool/python-Scrapy


57 Commits

Author SHA256 Message Date
7d24715d4d Accepting request 1318232 from devel:languages:python
- Update CVE-2025-6176.patch to reflect the latest upstream changes to
  the patch.
- Remove the CVE-2025-6176-testfile-bomb-br-64GiB.bin source; it is no
  longer needed.
  (gh#scrapy/scrapy#7134, bsc#1252945, CVE-2025-6176)

OBS-URL: https://build.opensuse.org/request/show/1318232
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=26
2025-11-18 14:33:55 +00:00
56db2db8d1 - Update CVE-2025-6176.patch to reflect the latest upstream changes to
  the patch.
- Remove the CVE-2025-6176-testfile-bomb-br-64GiB.bin source; it is no
  longer needed.
  (gh#scrapy/scrapy#7134, bsc#1252945, CVE-2025-6176)

OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=53
2025-11-17 11:39:40 +00:00
911057633b Accepting request 1317343 from devel:languages:python
- Use libalternatives
- Use multibuild to run tests in a subpackage
- Add upstream patch CVE-2025-6176.patch to mitigate brotli and
  deflate decompression bomb DoS.
  This patch adds a new binary test file, shipped as a new source:
  CVE-2025-6176-testfile-bomb-br-64GiB.bin
  (gh#scrapy/scrapy#7134, bsc#1252945, CVE-2025-6176)

OBS-URL: https://build.opensuse.org/request/show/1317343
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=25
2025-11-12 20:16:06 +00:00
1c9c9bf4a9 - Use libalternatives
- Use multibuild to run tests in a subpackage
- Add upstream patch CVE-2025-6176.patch to mitigate brotli and
  deflate decompression bomb DoS.
  This patch adds a new binary test file, shipped as a new source:
  CVE-2025-6176-testfile-bomb-br-64GiB.bin
  (gh#scrapy/scrapy#7134, bsc#1252945, CVE-2025-6176)

OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=51
2025-11-12 13:13:03 +00:00
36b4bfaf0b Accepting request 1296688 from devel:languages:python
- Update to 2.13.3:
  * Changed the values for DOWNLOAD_DELAY (from 0 to 1) and
    CONCURRENT_REQUESTS_PER_DOMAIN (from 8 to 1) in the default project
    template.
  * Fixed several bugs in the engine initialization and exception handling
    logic.
  * Allowed running tests with Twisted 25.5.0+ again and fixed test failures
    with lxml 6.0.0.
  * Give callback requests precedence over start requests when priority
    values are the same.
  * The asyncio reactor is now enabled by default
  * Replaced start_requests() (sync) with start() (async) and changed how it
    is iterated.
  * Added the allow_offsite request meta key
  * Spider middlewares that don't support asynchronous spider output are
    deprecated
  * Added a base class for universal spider middlewares
- Add patch remove-hoverxref.patch:
  * Do not use deprecated sphinx-hoverxref extension.
- Add patch no-dark-mode.patch:
  * Do not use unavailable sphinx-rtd-dark-mode extension.

OBS-URL: https://build.opensuse.org/request/show/1296688
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=24
2025-07-31 15:47:02 +00:00
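
As a hedged illustration of the 2.13 changes noted above (the asynchronous start() method replacing start_requests(), and the allow_offsite request meta key), a minimal spider sketch; the spider name and URLs are assumptions, not part of this package:

    import scrapy

    class ExampleSpider(scrapy.Spider):
        name = "example"

        async def start(self):
            # Scrapy 2.13: start() is an async generator that replaces the
            # synchronous start_requests(); yield the initial requests here.
            yield scrapy.Request("https://example.com", callback=self.parse)

        def parse(self, response):
            # allow_offsite lets a single request bypass the offsite filtering
            # that allowed_domains would otherwise apply.
            yield response.follow(
                "https://other.example.org/",
                callback=self.parse,
                meta={"allow_offsite": True},
            )
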
dba2d7540a skip another test
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=49
2025-07-31 05:18:58 +00:00
406916eda8 - Update to 2.13.3:
* Changed the values for DOWNLOAD_DELAY (from 0 to 1) and
    CONCURRENT_REQUESTS_PER_DOMAIN (from 8 to 1) in the default project
    template.
  * Fixed several bugs in the engine initialization and exception handling
    logic.
  * Allowed running tests with Twisted 25.5.0+ again and fixed test failures
    with lxml 6.0.0.
  * Give callback requests precedence over start requests when priority
    values are the same.
  * The asyncio reactor is now enabled by default
  * Replaced start_requests() (sync) with start() (async) and changed how it
    is iterated.
  * Added the allow_offsite request meta key
  * Spider middlewares that don't support asynchronous spider output are
    deprecated
  * Added a base class for universal spider middlewares
- Add patch remove-hoverxref.patch:
  * Do not use deprecated sphinx-hoverxref extension.
- Add patch no-dark-mode.patch:
  * Do not use unavailable sphinx-rtd-dark-mode extension.

OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=48
2025-07-31 04:43:53 +00:00
0d3dbc2801 Accepting request 1264848 from devel:languages:python
- Normalize metadata directory name.

Requires python-setuptools 78 to build successfully.

OBS-URL: https://build.opensuse.org/request/show/1264848
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=23
2025-04-16 18:38:35 +00:00
a7924cbd08 - Normalize metadata directory name.
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=46
2025-03-27 05:46:50 +00:00
f2ecbd1d47 Accepting request 1227933 from devel:languages:python
- Update to 2.12.0:
  * Dropped support for Python 3.8, added support for Python 3.13
  * start_requests can now yield items
  * Added scrapy.http.JsonResponse
  * Added the CLOSESPIDER_PAGECOUNT_NO_ITEM setting

OBS-URL: https://build.opensuse.org/request/show/1227933
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=22
2024-12-03 19:47:04 +00:00
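
A small, hedged settings sketch for the CLOSESPIDER_PAGECOUNT_NO_ITEM setting mentioned in the 2.12.0 notes above; the threshold is an example value, not a recommended default:

    # settings.py (illustrative): close the spider if this many consecutive
    # pages are crawled without producing any items.
    CLOSESPIDER_PAGECOUNT_NO_ITEM = 50
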
d580aa635a - Update to 2.12.0:
* Dropped support for Python 3.8, added support for Python 3.13
  * start_requests can now yield items
  * Added scrapy.http.JsonResponse
  * Added the CLOSESPIDER_PAGECOUNT_NO_ITEM setting

OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=44
2024-12-03 08:25:27 +00:00
f56239b802 Accepting request 1186841 from devel:languages:python
- update to 2.11.2 (bsc#1224474, CVE-2024-1968):
  * Redirects to non-HTTP protocols are no longer followed.
    Please, see the 23j4-mw76-5v7h security advisory for more
    information. (:issue:`457`)
  * The Authorization header is now dropped on redirects to a
    different scheme (http:// or https://) or port, even if the
    domain is the same. Please, see the 4qqq-9vqf-3h3f security
    advisory for more information.
  * When using system proxy settings that are different for
    http:// and https://, redirects to a different URL scheme
    will now also trigger the corresponding change in proxy
    settings for the redirected request. Please, see the
    jm3v-qxmh-hxwv security advisory for more information.
    (:issue:`767`)
  * :attr:`Spider.allowed_domains
    <scrapy.Spider.allowed_domains>` is now enforced for all
    requests, and not only requests from spider callbacks.
  * :func:`~scrapy.utils.iterators.xmliter_lxml` no longer
    resolves XML entities.
  * defusedxml is now used to make
    :class:`scrapy.http.request.rpc.XmlRpcRequest` more secure.
  * Restored support for brotlipy_, which had been dropped in
    Scrapy 2.11.1 in favor of brotli. (:issue:`6261`)  Note
    brotlipy is deprecated, both in Scrapy and upstream. Use
    brotli instead if you can.
  * Make :setting:`METAREFRESH_IGNORE_TAGS` ["noscript"] by
    default. This prevents :class:`~scrapy.downloadermiddlewares.
    redirect.MetaRefreshMiddleware` from following redirects that
    would not be followed by web browsers with JavaScript
    enabled.

OBS-URL: https://build.opensuse.org/request/show/1186841
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=21
2024-07-11 18:33:34 +00:00
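
Two hedged snippets related to the 2.11.2 notes above; the setting value shown is illustrative:

    # settings.py: METAREFRESH_IGNORE_TAGS now defaults to ["noscript"];
    # setting it to [] restores the previous behaviour of also following
    # <meta refresh> redirects found inside <noscript>.
    METAREFRESH_IGNORE_TAGS = []

    # Per the notes above, xmliter_lxml replaces the deprecated xmliter and
    # no longer resolves XML entities:
    from scrapy.utils.iterators import xmliter_lxml
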
053a125313 Accepting request 1164153 from devel:languages:python
Automatic submission by obs-autosubmit

OBS-URL: https://build.opensuse.org/request/show/1164153
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=20
2024-04-03 15:19:30 +00:00
542431ad3b Accepting request 1161494 from devel:languages:python
- update to 2.11.1 (bsc#1220514, CVE-2024-1892):
  * Addressed `ReDoS vulnerabilities` (bsc#1220514, CVE-2024-1892)
    -  ``scrapy.utils.iterators.xmliter`` is now deprecated in favor of
       :func:`~scrapy.utils.iterators.xmliter_lxml`, which
       :class:`~scrapy.spiders.XMLFeedSpider` now uses.
       To minimize the impact of this change on existing code,
       :func:`~scrapy.utils.iterators.xmliter_lxml` now supports indicating
       the node namespace with a prefix in the node name, and big files with
       highly nested trees when using libxml2 2.7+.
    -  Fixed regular expressions in the implementation of the
       :func:`~scrapy.utils.response.open_in_browser` function.
      .. _ReDoS vulnerabilities: https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
  *  :setting:`DOWNLOAD_MAXSIZE` and :setting:`DOWNLOAD_WARNSIZE` now also apply
     to the decompressed response body. Please, see the `7j7m-v7m3-jqm7 security
     advisory`_ for more information.
     .. _7j7m-v7m3-jqm7 security advisory: https://github.com/scrapy/scrapy/security/advisories/GHSA-7j7m-v7m3-jqm7
  *  Also in relation with the `7j7m-v7m3-jqm7 security advisory`_, the
     deprecated ``scrapy.downloadermiddlewares.decompression`` module has been
     removed.
  *  The ``Authorization`` header is now dropped on redirects to a different
     domain. Please, see the `cw9j-q3vf-hrrv security advisory`_ for more
     information.
  *  The OS signal handling code was refactored to no longer use private Twisted
      functions. (:issue:`6024`, :issue:`6064`, :issue:`6112`)
  *  Improved documentation for :class:`~scrapy.crawler.Crawler` initialization
     changes made in the 2.11.0 release. (:issue:`6057`, :issue:`6147`)
  *  Extended documentation for :attr:`Request.meta <scrapy.http.Request.meta>`.
  *  Fixed the :reqmeta:`dont_merge_cookies` documentation. (:issue:`5936`,
  *  Added a link to Zyte's export guides to the :ref:`feed exports
  *  Added a missing note about backward-incompatible changes in

OBS-URL: https://build.opensuse.org/request/show/1161494
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=19
2024-03-27 19:41:53 +00:00
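
A hedged settings sketch for the size limits discussed above, which from 2.11.1 also apply to the decompressed response body; the values are illustrative, not package defaults:

    # settings.py (illustrative values)
    DOWNLOAD_MAXSIZE = 100 * 1024 * 1024   # responses larger than this,
                                           # compressed or decompressed, are dropped
    DOWNLOAD_WARNSIZE = 10 * 1024 * 1024   # larger responses only log a warning
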
OBS User buildservice-autocommit
04481ebc46 baserev update by copy to link target
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=39
2024-03-27 19:41:53 +00:00
OBS User buildservice-autocommit
3087fe5d77 Updating link to change in openSUSE:Factory/python-Scrapy revision 19
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=c8618efefe3e306c402cf2ae54ee2e71
2024-03-27 19:41:53 +00:00
3df144e0aa - update to 2.11.1 (bsc#1220514, CVE-2024-1892, bsc#1221986):
advisory`_ for more information. (bsc#1221986)

OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=38
2024-03-26 15:10:26 +00:00
119328cdce - update to 2.11.1 (bsc#1220514, CVE-2024-1892):
* Addressed `ReDoS vulnerabilities` (bsc#1220514, CVE-2024-1892)
    -  ``scrapy.utils.iterators.xmliter`` is now deprecated in favor of
       :func:`~scrapy.utils.iterators.xmliter_lxml`, which
       :class:`~scrapy.spiders.XMLFeedSpider` now uses.
       To minimize the impact of this change on existing code,
       :func:`~scrapy.utils.iterators.xmliter_lxml` now supports indicating
       the node namespace with a prefix in the node name, and big files with
       highly nested trees when using libxml2 2.7+.
    -  Fixed regular expressions in the implementation of the
       :func:`~scrapy.utils.response.open_in_browser` function.
      .. _ReDoS vulnerabilities: https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
  *  :setting:`DOWNLOAD_MAXSIZE` and :setting:`DOWNLOAD_WARNSIZE` now also apply
     to the decompressed response body. Please, see the `7j7m-v7m3-jqm7 security
     advisory`_ for more information.
     .. _7j7m-v7m3-jqm7 security advisory: https://github.com/scrapy/scrapy/security/advisories/GHSA-7j7m-v7m3-jqm7
  *  Also in relation with the `7j7m-v7m3-jqm7 security advisory`_, the
     deprecated ``scrapy.downloadermiddlewares.decompression`` module has been
     removed.
  *  The ``Authorization`` header is now dropped on redirects to a different
     domain. Please, see the `cw9j-q3vf-hrrv security advisory`_ for more
     information.
  *  The OS signal handling code was refactored to no longer use private Twisted
      functions. (:issue:`6024`, :issue:`6064`, :issue:`6112`)
  *  Improved documentation for :class:`~scrapy.crawler.Crawler` initialization
     changes made in the 2.11.0 release. (:issue:`6057`, :issue:`6147`)
  *  Extended documentation for :attr:`Request.meta <scrapy.http.Request.meta>`.
  *  Fixed the :reqmeta:`dont_merge_cookies` documentation. (:issue:`5936`,
  *  Added a link to Zyte's export guides to the :ref:`feed exports
  *  Added a missing note about backward-incompatible changes in

OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=37
2024-03-25 15:36:37 +00:00
0ad62694dd Accepting request 1137882 from devel:languages:python
- Add patch twisted-23.8.0-compat.patch gh#scrapy/scrapy#6064
- Update to 2.11.0:
  - Spiders can now modify settings in their from_crawler methods,
    e.g. based on spider arguments.
  - Periodic logging of stats.
  - Bug fixes.
- 2.10.0:
  - Added Python 3.12 support, dropped Python 3.7 support.
  - The new add-ons framework simplifies configuring 3rd-party
    components that support it.
  - Exceptions to retry can now be configured.
  - Many fixes and improvements for feed exports.
- 2.9.0:
  - Per-domain download settings.
  - Compatibility with new cryptography and new parsel.
  - JMESPath selectors from the new parsel.
  - Bug fixes.
- 2.8.0:
  - This is a maintenance release, with minor features, bug fixes, and
    cleanups.

OBS-URL: https://build.opensuse.org/request/show/1137882
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=18
2024-01-10 20:52:52 +00:00
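
A minimal, hedged sketch of the 2.11.0 feature above (spiders modifying settings from their from_crawler methods, e.g. based on spider arguments); the spider, argument name, and values are assumptions:

    import scrapy

    class TunableSpider(scrapy.Spider):
        name = "tunable"

        @classmethod
        def from_crawler(cls, crawler, *args, **kwargs):
            spider = super().from_crawler(crawler, *args, **kwargs)
            # Adjust a setting based on a spider argument (-a polite=1);
            # settings are still mutable at this point in 2.11+.
            if kwargs.get("polite"):
                spider.settings.set("DOWNLOAD_DELAY", 5, priority="spider")
            return spider
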
feb6ce6077 - Disable flaky test
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=35
2024-01-10 08:44:17 +00:00
f93a35cd30 - Add patch twisted-23.8.0-compat.patch gh#scrapy/scrapy#6064
- Update to 2.11.0:
  - Spiders can now modify settings in their from_crawler methods,
    e.g. based on spider arguments.
  - Periodic logging of stats.
  - Bug fixes.
- 2.10.0:
  - Added Python 3.12 support, dropped Python 3.7 support.
  - The new add-ons framework simplifies configuring 3rd-party
    components that support it.
  - Exceptions to retry can now be configured.
  - Many fixes and improvements for feed exports.
- 2.9.0:
  - Per-domain download settings.
  - Compatibility with new cryptography and new parsel.
  - JMESPath selectors from the new parsel.
  - Bug fixes.
- 2.8.0:
  - This is a maintenance release, with minor features, bug fixes, and
    cleanups.

OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=34
2024-01-10 07:53:57 +00:00
7967a165bb Accepting request 1034478 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/1034478
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=17
2022-11-09 11:56:49 +00:00
47fd8f7029 Accepting request 1034369 from home:yarunachalam:branches:devel:languages:python
- Update to v2.7.1 
  * Relaxed the restriction introduced in 2.6.2 so that the Proxy-Authorization header can again be set explicitly in certain cases,
    restoring compatibility with scrapy-zyte-smartproxy 2.1.0 and older
  Bug fixes
  * Full changelog: https://docs.scrapy.org/en/latest/news.html#scrapy-2-7-1-2022-11-02

OBS-URL: https://build.opensuse.org/request/show/1034369
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=32
2022-11-08 09:53:03 +00:00
e6503c2be7 Accepting request 1032071 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/1032071
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=16
2022-10-29 18:16:47 +00:00
d9752627a9 Accepting request 1031641 from home:yarunachalam:branches:devel:languages:python
- Update to v2.7.0 
  Highlights:
  * Added Python 3.11 support, dropped Python 3.6 support
  * Improved support for :ref:`asynchronous callbacks <topics-coroutines>`
  * :ref:`Asyncio support <using-asyncio>` is enabled by default on new projects
  * Output names of item fields can now be arbitrary strings
  * Centralized :ref:`request fingerprinting <request-fingerprints>` configuration is now possible
  Modified requirements
  * Python 3.7 or greater is now required; support for Python 3.6 has been dropped. Support for the upcoming Python 3.11 has been added.
    The minimum required version of some dependencies has changed as well:
    - lxml: 3.5.0 → 4.3.0
    - Pillow (:ref:`images pipeline <images-pipeline>`): 4.0.0 → 7.1.0
    - zope.interface: 5.0.0 → 5.1.0
    (:issue:`5512`, :issue:`5514`, :issue:`5524`, :issue:`5563`, :issue:`5664`, :issue:`5670`, :issue:`5678`)
  Deprecations
    - :meth:`ImagesPipeline.thumb_path <scrapy.pipelines.images.ImagesPipeline.thumb_path>` must now accept an item parameter (:issue:`5504`, :issue:`5508`).
    - The scrapy.downloadermiddlewares.decompression module is now deprecated (:issue:`5546`, :issue:`5547`).
  
  Complete changelog https://github.com/scrapy/scrapy/blob/2.7/docs/news.rst

OBS-URL: https://build.opensuse.org/request/show/1031641
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=30
2022-10-28 22:27:39 +00:00
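
An illustrative one-line setting for the centralized request fingerprinting configuration mentioned in the 2.7.0 highlights above (the setting name reflects the upstream 2.7 feature, shown here as an assumption rather than packaging advice):

    # settings.py: opt in to the newer fingerprinting implementation project-wide.
    REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
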
c966983550 Accepting request 1002736 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/1002736
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=15
2022-09-12 17:08:23 +00:00
a84bf5033f Accepting request 1002338 from home:yarunachalam:branches:devel:languages:python
- Update to v2.6.2 
  Security bug fix:
  * When HttpProxyMiddleware processes a request with proxy metadata, and that proxy metadata includes proxy credentials,
    HttpProxyMiddleware sets the Proxy-Authorization header, but only if that header is not already set.
  * There are third-party proxy-rotation downloader middlewares that set different proxy metadata every time they process a request.
  * Because of request retries and redirects, the same request can be processed by downloader middlewares more than once,
    including both HttpProxyMiddleware and any third-party proxy-rotation downloader middleware.
  * These third-party proxy-rotation downloader middlewares could change the proxy metadata of a request to a new value,
    but fail to remove the Proxy-Authentication header from the previous value of the proxy metadata, causing the credentials of one
    proxy to be sent to a different proxy.
  * To prevent the unintended leaking of proxy credentials, the behavior of HttpProxyMiddleware is now as follows when processing a request:
    + If the request being processed defines proxy metadata that includes credentials, the Proxy-Authorization header is always updated 
    to feature those credentials.
    + If the request being processed defines proxy metadata without credentials, the Proxy-Authorization header is removed unless
    it was originally defined for the same proxy URL.
    + To remove proxy credentials while keeping the same proxy URL, remove the Proxy-Authorization header.
    + If the request has no proxy metadata, or that metadata is a falsy value (e.g. None), the Proxy-Authorization header is removed.
    + It is no longer possible to set a proxy URL through the proxy metadata but set the credentials through the Proxy-Authorization header.
    Set proxy credentials through the proxy metadata instead.
  * Also fixes the following regressions introduced in 2.6.0:
    + CrawlerProcess supports again crawling multiple spiders (issue 5435, issue 5436)
    + Installing a Twisted reactor before Scrapy does (e.g. importing twisted.internet.reactor somewhere at the module level)
    no longer prevents Scrapy from starting, as long as a different reactor is not specified in TWISTED_REACTOR (issue 5525, issue 5528)
    + Fixed an exception that was being logged after the spider finished under certain conditions (issue 5437, issue 5440)
    + The --output/-o command-line parameter supports again a value starting with a hyphen (issue 5444, issue 5445)
    + The scrapy parse -h command no longer throws an error (issue 5481, issue 5482)

OBS-URL: https://build.opensuse.org/request/show/1002338
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=28
2022-09-12 08:00:07 +00:00
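
A hedged sketch of the request pattern implied by the 2.6.2 behaviour above: carry proxy credentials in the proxy meta value rather than a hand-set Proxy-Authorization header; the proxy URL and credentials are placeholders:

    import scrapy

    class ProxiedSpider(scrapy.Spider):
        name = "proxied"

        def start_requests(self):
            # Credentials live in the proxy URL; HttpProxyMiddleware derives the
            # Proxy-Authorization header from them and keeps it consistent when
            # a proxy-rotation middleware changes the proxy.
            yield scrapy.Request(
                "https://example.com",
                meta={"proxy": "http://user:secret@proxy.example.com:8080"},
            )
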
d418d5a4b7 Accepting request 959733 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/959733
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=14
2022-03-06 17:48:43 +00:00
bc00530500 Accepting request 959304 from home:bnavigator:branches:devel:languages:python
- Update runtime requirements and test deselections

OBS-URL: https://build.opensuse.org/request/show/959304
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=26
2022-03-06 16:31:19 +00:00
2e78e2e37d Accepting request 958587 from devel:languages:python
- Update to v2.6.1
  * Security fixes for cookie handling (CVE-2022-0577 aka
    bsc#1196638, GHSA-mfjm-vh54-3f96)
  * Python 3.10 support
  * asyncio support is no longer considered experimental, and works
    out-of-the-box on Windows regardless of your Python version
  * Feed exports now support pathlib.Path output paths and per-feed
    item filtering and post-processing
- Remove unnecessary patches:
  - remove-h2-version-restriction.patch
  - add-peak-method-to-queues.patch

OBS-URL: https://build.opensuse.org/request/show/958587
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=13
2022-03-03 23:17:11 +00:00
b1973a8506 Fix changelogs
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=24
2022-03-03 06:01:02 +00:00
7533e3a14a - Upgrade to 2.6.1:
- Remove unnecessary patches:
  - remove-h2-version-restriction.patch
  - add-peak-method-to-queues.patch

OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=23
2022-03-02 23:14:08 +00:00
548e19fdca Accepting request 946882 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/946882
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=12
2022-01-17 21:34:04 +00:00
edd1727cd3 Accepting request 946843 from home:bnavigator:branches:devel:languages:python
- Skip a failing test in python310: exception format not recognized

OBS-URL: https://build.opensuse.org/request/show/946843
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=22
2022-01-17 06:30:18 +00:00
e3d3aaef29 Accepting request 924057 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/924057
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=11
2021-10-07 22:06:30 +00:00
0e50613c60 Accepting request 923811 from home:bnavigator:branches:devel:languages:python
- Update to 2.5.1, Security bug fix
  * boo#1191446, CVE-2021-41125
  * If you use HttpAuthMiddleware (i.e. the http_user and
    http_pass spider attributes) for HTTP authentication,
    any request exposes your credentials to the request
    target.
  * To prevent unintended exposure of authentication
    credentials to unintended domains, you must now
    additionally set a new, additional spider attribute,
    http_auth_domain, and point it to the specific domain to
    which the authentication credentials must be sent.
  * If the http_auth_domain spider attribute is not set, the
    domain of the first request will be considered the HTTP
    authentication target, and authentication credentials
    will only be sent in requests targeting that domain.
  * If you need to send the same HTTP authentication
    credentials to multiple domains, you can use
    w3lib.http.basic_auth_header instead to set the value of
    the Authorization header of your requests.
  * If you really want your spider to send the same HTTP
    authentication credentials to any domain, set the
    http_auth_domain spider attribute to None.
  * Finally, if you are a user of scrapy-splash, know that
    this version of Scrapy breaks compatibility with
    scrapy-splash 0.7.2 and earlier. You will need to upgrade
    scrapy-splash to a greater version for it to continue to
    work.

OBS-URL: https://build.opensuse.org/request/show/923811
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=21
2021-10-07 16:57:39 +00:00
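
A minimal, hedged sketch of the http_auth_domain attribute described above; the credentials and domain are placeholders:

    import scrapy

    class AuthSpider(scrapy.Spider):
        name = "auth-example"
        http_user = "user"
        http_pass = "secret"
        # Credentials are only sent to this domain; without it, 2.5.1 falls
        # back to the domain of the first request.
        http_auth_domain = "intranet.example.com"
        start_urls = ["https://intranet.example.com/"]

        def parse(self, response):
            yield {"status": response.status}
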
bacd15a4d3 Accepting request 917717 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/917717
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=10
2021-09-09 21:07:43 +00:00
ed9c5a3da0 Accepting request 917688 from home:fusionfuture:branches:devel:languages:python
- Remove h2 < 4.0 dependency version restriction. (boo#1190035)
  * remove-h2-version-restriction.patch
- Add peak method to queues to fix build with queuelib 1.6.2.
  * add-peak-method-to-queues.patch
- Drop support for Python 3.6 as python-uvloop does not support it.
- Require testfixtures >= 6.0.0 (tests need LogCapture.check_present).
  (2953bb4caa)

OBS-URL: https://build.opensuse.org/request/show/917688
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=20
2021-09-09 12:02:15 +00:00
59e61f6a88 Accepting request 889037 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/889037
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=9
2021-04-28 23:38:33 +00:00
278632db39 Accepting request 889030 from home:bnavigator:branches:devel:languages:python
- Update to 2.5.0:
  * Official Python 3.9 support
  * Experimental HTTP/2 support
  * New get_retry_request() function to retry requests from spider
    callbacks
  * New headers_received signal that allows stopping downloads
    early
  * New Response.protocol attribute
- Release 2.4.1:
  * Fixed feed exports overwrite support
  * Fixed the asyncio event loop handling, which could make code
    hang
  * Fixed the IPv6-capable DNS resolver CachingHostnameResolver
    for download handlers that call reactor.resolve
  * Fixed the output of the genspider command showing placeholders
    instead of the import part of the generated spider module
    (issue 4874)
- Release 2.4.0:
  * Python 3.5 support has been dropped.
  * The file_path method of media pipelines can now access the
    source item.
  * This allows you to set a download file path based on item data.
  * The new item_export_kwargs key of the FEEDS setting allows to
    define keyword parameters to pass to item exporter classes.
  * You can now choose whether feed exports overwrite or append to
    the output file.
  * For example, when using the crawl or runspider commands, you
    can use the -O option instead of -o to overwrite the output
    file.
  * Zstd-compressed responses are now supported if zstandard is
    installed.
  * In settings, where the import path of a class is required, it
    is now possible to pass a class object instead.
- Release 2.3.0:
  * Feed exports now support Google Cloud Storage as a storage
    backend
  * The new FEED_EXPORT_BATCH_ITEM_COUNT setting allows to deliver
    output items in batches of up to the specified number of items.
  * It also serves as a workaround for delayed file delivery,
    which causes Scrapy to only start item delivery after the
    crawl has finished when using certain storage backends (S3,
    FTP, and now GCS).
  * The base implementation of item loaders has been moved into a
    separate library, itemloaders, allowing usage from outside
    Scrapy and a separate release schedule
- Release 2.2.1:
  * The startproject command no longer makes unintended changes to
    the permissions of files in the destination folder, such as
    removing execution permissions.

OBS-URL: https://build.opensuse.org/request/show/889030
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=18
2021-04-28 13:47:21 +00:00
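
A hedged FEEDS sketch for the export features listed above (per-feed overwrite and item_export_kwargs, added in the 2.4 series); the file name and options are illustrative:

    # settings.py (illustrative)
    FEEDS = {
        "items.json": {
            "format": "json",
            "overwrite": True,  # same effect as `scrapy crawl spider -O items.json`
            "item_export_kwargs": {"indent": 4},
        },
    }
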
281dc651f3 Accepting request 819355 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/819355
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=8
2020-07-08 17:13:43 +00:00
Tomáš Chvátal
50bf9ddc84 Accepting request 818747 from home:winski:python
Update Scrapy to 2.2.0. NOTICE: Scrapy now requires a new package, made by the Scrapy team, called itemadapter. I have already packaged itemadapter & have submitted the package for approval. See Request 818656. Please accept python-itemadapter prior to accepting this request (otherwise build will fail). Thank you!

Changelog:
  * Python 3.5.2+ is required now
  * dataclass objects and attrs objects are now valid item types
  * New TextResponse.json method
  * New bytes_received signal that allows canceling response download
  * CookiesMiddleware fixes 
- Update to 2.1.0:
  * New FEEDS setting to export to multiple feeds
  * New Response.ip_address attribute
- Remove zope-exception-test_crawler.patch
- Add new required dependency python-itemadapter
- Omit test that fails in OBS due to https / tls issues

OBS-URL: https://build.opensuse.org/request/show/818747
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=16
2020-07-08 06:42:00 +00:00
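
A hedged sketch of two 2.2.0 items noted above, dataclass objects as items and the TextResponse.json method; the API endpoint and field names are assumptions:

    from dataclasses import dataclass

    import scrapy

    @dataclass
    class Product:
        name: str
        price: float

    class ApiSpider(scrapy.Spider):
        name = "api-example"
        start_urls = ["https://api.example.com/products.json"]

        def parse(self, response):
            for entry in response.json():  # TextResponse.json, new in 2.2.0
                yield Product(name=entry["name"], price=entry["price"])
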
769bc702cf Accepting request 807286 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/807286
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=7
2020-05-19 12:58:12 +00:00
Tomáš Chvátal
554a84a443 Accepting request 807242 from home:pgajdos:python
submit

OBS-URL: https://build.opensuse.org/request/show/807242
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=14
2020-05-19 12:14:06 +00:00
6dcfb7e77c Accepting request 790737 from devel:languages:python
- Update to 2.0.1:
  * Python 2 support has been removed
  * Partial coroutine syntax support and experimental asyncio support
  * New Response.follow_all method
  * FTP support for media pipelines
  * New Response.certificate attribute
  * IPv6 support through DNS_RESOLVER
  * Response.follow_all now supports an empty URL iterable as input
  * Removed top-level reactor imports to prevent errors about the wrong
    Twisted reactor being installed when setting a different Twisted
    reactor using TWISTED_REACTOR
- Add zope-exception-test_crawler.patch, rewriting one testcase to pass
  with our version of Zope.
- Update BuildRequires based on test requirements.

OBS-URL: https://build.opensuse.org/request/show/790737
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=6
2020-04-02 15:43:41 +00:00
5577430fb1 - Update to 2.0.1:
* Python 2 support has been removed
  * Partial coroutine syntax support and experimental asyncio support
  * New Response.follow_all method
  * FTP support for media pipelines
  * New Response.certificate attribute
  * IPv6 support through DNS_RESOLVER
  * Response.follow_all now supports an empty URL iterable as input
  * Removed top-level reactor imports to prevent errors about the wrong
    Twisted reactor being installed when setting a different Twisted
    reactor using TWISTED_REACTOR
- Add zope-exception-test_crawler.patch, rewriting one testcase to pass
  with our version of Zope.
- Update BuildRequires based on test requirements.

OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=12
2020-04-02 03:41:29 +00:00
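
A minimal, hedged sketch of Response.follow_all from the 2.0 notes above; the CSS selector and URLs are assumptions:

    import scrapy

    class PaginationSpider(scrapy.Spider):
        name = "pagination-example"
        start_urls = ["https://example.com/list"]

        def parse(self, response):
            yield {"title": response.css("title::text").get()}
            # follow_all yields one Request per matched link and, per 2.0.1,
            # accepts an empty iterable of URLs without failing.
            yield from response.follow_all(css="a.next-page", callback=self.parse)
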
f1afecd802 Accepting request 765023 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/765023
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=5
2020-01-16 17:24:00 +00:00
Tomáš Chvátal
535b71edfe Accepting request 765007 from home:mcalabkova:branches:devel:languages:python
- update to 1.8.0
  * Dropped Python 3.4 support and updated minimum requirements; 
    made Python 3.8 support official
  * lots of new fixes and features

OBS-URL: https://build.opensuse.org/request/show/765007
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=10
2020-01-16 15:35:55 +00:00
55d75182b8 Accepting request 725773 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/725773
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=4
2019-08-24 16:48:44 +00:00
Tomáš Chvátal
9f5d599005 Accepting request 725578 from home:polslinux:branches:devel:languages:python
- Update to 1.7.3
  * Enforce lxml 4.3.5 or lower for Python 3.4
    (issue 3912, issue 3918).
  * Fix Python 2 support (issue 3889, issue 3893, issue 3896).

OBS-URL: https://build.opensuse.org/request/show/725578
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=8
2019-08-24 07:38:09 +00:00
1cb1df21ea Accepting request 718179 from devel:languages:python
- Format with spec-cleaner
- Use just python3 version of Sphinx

- version update to 1.7.1
  * Improvements for crawls targeting multiple domains
  * A cleaner way to pass arguments to callbacks
  * A new class for JSON requests
  * Improvements for rule-based spiders
  * New features for feed exports
  see news.rst for details

OBS-URL: https://build.opensuse.org/request/show/718179
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=3
2019-07-24 18:36:43 +00:00
Tomáš Chvátal
ddab9089be - Format with spec-cleaner
- Use just python3 version of Sphinx

OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=6
2019-07-24 10:27:28 +00:00
Tomáš Chvátal
df0b142876 Accepting request 718147 from home:pgajdos
- version update to 1.7.1
  * Improvements for crawls targeting multiple domains
  * A cleaner way to pass arguments to callbacks
  * A new class for JSON requests
  * Improvements for rule-based spiders
  * New features for feed exports
  see news.rst for details

OBS-URL: https://build.opensuse.org/request/show/718147
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=5
2019-07-24 09:00:54 +00:00
e0bb11edf0 Accepting request 703535 from devel:languages:python
OBS-URL: https://build.opensuse.org/request/show/703535
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=2
2019-05-17 21:43:32 +00:00
Tomáš Chvátal
c4b3eee51a Accepting request 703534 from home:anandrit:branches:devel:languages:python
- Skip flaky CrawlerTestCase

OBS-URL: https://build.opensuse.org/request/show/703534
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=3
2019-05-16 19:37:41 +00:00
Stephan Kulow
62ba8354a5 Accepting request 677379 from devel:languages:python
New package

OBS-URL: https://build.opensuse.org/request/show/677379
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python-Scrapy?expand=0&rev=1
2019-02-28 20:41:23 +00:00
Tomáš Chvátal
8dfa7191b3 Accepting request 677210 from home:frispete:python
Now with requested mods...

OBS-URL: https://build.opensuse.org/request/show/677210
OBS-URL: https://build.opensuse.org/package/show/devel:languages:python/python-Scrapy?expand=0&rev=1
2019-02-19 14:19:30 +00:00
9 changed files with 805 additions and 19 deletions

CVE-2025-6176.patch (new file, 613 lines)

@@ -0,0 +1,613 @@
Index: scrapy-2.13.3/conftest.py
===================================================================
--- scrapy-2.13.3.orig/conftest.py
+++ scrapy-2.13.3/conftest.py
@@ -116,6 +116,16 @@ def requires_boto3(request):
pytest.skip("boto3 is not installed")
+@pytest.fixture(autouse=True)
+def requires_mitmproxy(request):
+ if not request.node.get_closest_marker("requires_mitmproxy"):
+ return
+ try:
+ import mitmproxy # noqa: F401, PLC0415
+ except ImportError:
+ pytest.skip("mitmproxy is not installed")
+
+
def pytest_configure(config):
if config.getoption("--reactor") != "default":
install_reactor("twisted.internet.asyncioreactor.AsyncioSelectorReactor")
Index: scrapy-2.13.3/pyproject.toml
===================================================================
--- scrapy-2.13.3.orig/pyproject.toml
+++ scrapy-2.13.3/pyproject.toml
@@ -242,6 +242,7 @@ markers = [
"requires_uvloop: marks tests as only enabled when uvloop is known to be working",
"requires_botocore: marks tests that need botocore (but not boto3)",
"requires_boto3: marks tests that need botocore and boto3",
+ "requires_mitmproxy: marks tests that need mitmproxy",
]
filterwarnings = [
"ignore::DeprecationWarning:twisted.web.static"
Index: scrapy-2.13.3/scrapy/downloadermiddlewares/httpcompression.py
===================================================================
--- scrapy-2.13.3.orig/scrapy/downloadermiddlewares/httpcompression.py
+++ scrapy-2.13.3/scrapy/downloadermiddlewares/httpcompression.py
@@ -29,14 +29,20 @@ logger = getLogger(__name__)
ACCEPTED_ENCODINGS: list[bytes] = [b"gzip", b"deflate"]
try:
- try:
- import brotli # noqa: F401
- except ImportError:
- import brotlicffi # noqa: F401
+ import brotli
except ImportError:
pass
else:
- ACCEPTED_ENCODINGS.append(b"br")
+ try:
+ brotli.Decompressor.can_accept_more_data
+ except AttributeError: # pragma: no cover
+ warnings.warn(
+ "You have brotli installed. But 'br' encoding support now requires "
+ "brotli version >= 1.2.0. Please upgrade brotli version to make Scrapy "
+ "decode 'br' encoded responses.",
+ )
+ else:
+ ACCEPTED_ENCODINGS.append(b"br")
try:
import zstandard # noqa: F401
@@ -98,13 +104,13 @@ class HttpCompressionMiddleware:
decoded_body, content_encoding = self._handle_encoding(
response.body, content_encoding, max_size
)
- except _DecompressionMaxSizeExceeded:
+ except _DecompressionMaxSizeExceeded as e:
raise IgnoreRequest(
f"Ignored response {response} because its body "
- f"({len(response.body)} B compressed) exceeded "
- f"DOWNLOAD_MAXSIZE ({max_size} B) during "
- f"decompression."
- )
+ f"({len(response.body)} B compressed, "
+ f"{e.decompressed_size} B decompressed so far) exceeded "
+ f"DOWNLOAD_MAXSIZE ({max_size} B) during decompression."
+ ) from e
if len(response.body) < warn_size <= len(decoded_body):
logger.warning(
f"{response} body size after decompression "
@@ -187,7 +193,7 @@ class HttpCompressionMiddleware:
f"from unsupported encoding(s) '{encodings_str}'."
)
if b"br" in encodings:
- msg += " You need to install brotli or brotlicffi to decode 'br'."
+ msg += " You need to install brotli >= 1.2.0 to decode 'br'."
if b"zstd" in encodings:
msg += " You need to install zstandard to decode 'zstd'."
logger.warning(msg)
Index: scrapy-2.13.3/scrapy/utils/_compression.py
===================================================================
--- scrapy-2.13.3.orig/scrapy/utils/_compression.py
+++ scrapy-2.13.3/scrapy/utils/_compression.py
@@ -1,42 +1,9 @@
import contextlib
import zlib
from io import BytesIO
-from warnings import warn
-
-from scrapy.exceptions import ScrapyDeprecationWarning
-
-try:
- try:
- import brotli
- except ImportError:
- import brotlicffi as brotli
-except ImportError:
- pass
-else:
- try:
- brotli.Decompressor.process
- except AttributeError:
- warn(
- (
- "You have brotlipy installed, and Scrapy will use it, but "
- "Scrapy support for brotlipy is deprecated and will stop "
- "working in a future version of Scrapy. brotlipy itself is "
- "deprecated, it has been superseded by brotlicffi. Please, "
- "uninstall brotlipy and install brotli or brotlicffi instead. "
- "brotlipy has the same import name as brotli, so keeping both "
- "installed is strongly discouraged."
- ),
- ScrapyDeprecationWarning,
- )
-
- def _brotli_decompress(decompressor, data):
- return decompressor.decompress(data)
-
- else:
-
- def _brotli_decompress(decompressor, data):
- return decompressor.process(data)
+with contextlib.suppress(ImportError):
+ import brotli
with contextlib.suppress(ImportError):
import zstandard
@@ -46,62 +13,64 @@ _CHUNK_SIZE = 65536 # 64 KiB
class _DecompressionMaxSizeExceeded(ValueError):
- pass
+ def __init__(self, decompressed_size: int, max_size: int) -> None:
+ self.decompressed_size = decompressed_size
+ self.max_size = max_size
+
+ def __str__(self) -> str:
+ return (
+ f"The number of bytes decompressed so far "
+ f"({self.decompressed_size} B) exceeded the specified maximum "
+ f"({self.max_size} B)."
+ )
+
+
+def _check_max_size(decompressed_size: int, max_size: int) -> None:
+ if max_size and decompressed_size > max_size:
+ raise _DecompressionMaxSizeExceeded(decompressed_size, max_size)
def _inflate(data: bytes, *, max_size: int = 0) -> bytes:
decompressor = zlib.decompressobj()
- raw_decompressor = zlib.decompressobj(wbits=-15)
- input_stream = BytesIO(data)
+ try:
+ first_chunk = decompressor.decompress(data, max_length=_CHUNK_SIZE)
+ except zlib.error:
+ # to work with raw deflate content that may be sent by microsoft servers.
+ decompressor = zlib.decompressobj(wbits=-15)
+ first_chunk = decompressor.decompress(data, max_length=_CHUNK_SIZE)
+ decompressed_size = len(first_chunk)
+ _check_max_size(decompressed_size, max_size)
output_stream = BytesIO()
- output_chunk = b"."
- decompressed_size = 0
- while output_chunk:
- input_chunk = input_stream.read(_CHUNK_SIZE)
- try:
- output_chunk = decompressor.decompress(input_chunk)
- except zlib.error:
- if decompressor != raw_decompressor:
- # ugly hack to work with raw deflate content that may
- # be sent by microsoft servers. For more information, see:
- # http://carsten.codimi.de/gzip.yaws/
- # http://www.port80software.com/200ok/archive/2005/10/31/868.aspx
- # http://www.gzip.org/zlib/zlib_faq.html#faq38
- decompressor = raw_decompressor
- output_chunk = decompressor.decompress(input_chunk)
- else:
- raise
+ output_stream.write(first_chunk)
+ while decompressor.unconsumed_tail:
+ output_chunk = decompressor.decompress(
+ decompressor.unconsumed_tail, max_length=_CHUNK_SIZE
+ )
decompressed_size += len(output_chunk)
- if max_size and decompressed_size > max_size:
- raise _DecompressionMaxSizeExceeded(
- f"The number of bytes decompressed so far "
- f"({decompressed_size} B) exceed the specified maximum "
- f"({max_size} B)."
- )
+ _check_max_size(decompressed_size, max_size)
output_stream.write(output_chunk)
- output_stream.seek(0)
- return output_stream.read()
+ if tail := decompressor.flush():
+ decompressed_size += len(tail)
+ _check_max_size(decompressed_size, max_size)
+ output_stream.write(tail)
+ return output_stream.getvalue()
def _unbrotli(data: bytes, *, max_size: int = 0) -> bytes:
decompressor = brotli.Decompressor()
- input_stream = BytesIO(data)
+ first_chunk = decompressor.process(data, output_buffer_limit=_CHUNK_SIZE)
+ decompressed_size = len(first_chunk)
+ _check_max_size(decompressed_size, max_size)
output_stream = BytesIO()
- output_chunk = b"."
- decompressed_size = 0
- while output_chunk:
- input_chunk = input_stream.read(_CHUNK_SIZE)
- output_chunk = _brotli_decompress(decompressor, input_chunk)
+ output_stream.write(first_chunk)
+ while not decompressor.is_finished():
+ output_chunk = decompressor.process(b"", output_buffer_limit=_CHUNK_SIZE)
+ if not output_chunk:
+ break
decompressed_size += len(output_chunk)
- if max_size and decompressed_size > max_size:
- raise _DecompressionMaxSizeExceeded(
- f"The number of bytes decompressed so far "
- f"({decompressed_size} B) exceed the specified maximum "
- f"({max_size} B)."
- )
+ _check_max_size(decompressed_size, max_size)
output_stream.write(output_chunk)
- output_stream.seek(0)
- return output_stream.read()
+ return output_stream.getvalue()
def _unzstd(data: bytes, *, max_size: int = 0) -> bytes:
@@ -113,12 +82,6 @@ def _unzstd(data: bytes, *, max_size: in
while output_chunk:
output_chunk = stream_reader.read(_CHUNK_SIZE)
decompressed_size += len(output_chunk)
- if max_size and decompressed_size > max_size:
- raise _DecompressionMaxSizeExceeded(
- f"The number of bytes decompressed so far "
- f"({decompressed_size} B) exceed the specified maximum "
- f"({max_size} B)."
- )
+ _check_max_size(decompressed_size, max_size)
output_stream.write(output_chunk)
- output_stream.seek(0)
- return output_stream.read()
+ return output_stream.getvalue()
Index: scrapy-2.13.3/scrapy/utils/gz.py
===================================================================
--- scrapy-2.13.3.orig/scrapy/utils/gz.py
+++ scrapy-2.13.3/scrapy/utils/gz.py
@@ -5,7 +5,7 @@ from gzip import GzipFile
from io import BytesIO
from typing import TYPE_CHECKING
-from ._compression import _CHUNK_SIZE, _DecompressionMaxSizeExceeded
+from ._compression import _CHUNK_SIZE, _check_max_size
if TYPE_CHECKING:
from scrapy.http import Response
@@ -31,15 +31,9 @@ def gunzip(data: bytes, *, max_size: int
break
raise
decompressed_size += len(chunk)
- if max_size and decompressed_size > max_size:
- raise _DecompressionMaxSizeExceeded(
- f"The number of bytes decompressed so far "
- f"({decompressed_size} B) exceed the specified maximum "
- f"({max_size} B)."
- )
+ _check_max_size(decompressed_size, max_size)
output_stream.write(chunk)
- output_stream.seek(0)
- return output_stream.read()
+ return output_stream.getvalue()
def gzip_magic_number(response: Response) -> bool:
Index: scrapy-2.13.3/tests/test_downloadermiddleware_httpcompression.py
===================================================================
--- scrapy-2.13.3.orig/tests/test_downloadermiddleware_httpcompression.py
+++ scrapy-2.13.3/tests/test_downloadermiddleware_httpcompression.py
@@ -51,6 +51,22 @@ FORMAT = {
}
+def _skip_if_no_br() -> None:
+ try:
+ import brotli # noqa: PLC0415
+
+ brotli.Decompressor.can_accept_more_data
+ except (ImportError, AttributeError):
+ pytest.skip("no brotli support")
+
+
+def _skip_if_no_zstd() -> None:
+ try:
+ import zstandard # noqa: F401,PLC0415
+ except ImportError:
+ pytest.skip("no zstd support (zstandard)")
+
+
class TestHttpCompression:
def setup_method(self):
self.crawler = get_crawler(Spider)
@@ -124,13 +140,7 @@ class TestHttpCompression:
self.assertStatsEqual("httpcompression/response_bytes", 74837)
def test_process_response_br(self):
- try:
- try:
- import brotli # noqa: F401
- except ImportError:
- import brotlicffi # noqa: F401
- except ImportError:
- raise SkipTest("no brotli")
+ _skip_if_no_br()
response = self._getresponse("br")
request = response.request
assert response.headers["Content-Encoding"] == b"br"
@@ -143,14 +153,8 @@ class TestHttpCompression:
def test_process_response_br_unsupported(self):
try:
- try:
- import brotli # noqa: F401
-
- raise SkipTest("Requires not having brotli support")
- except ImportError:
- import brotlicffi # noqa: F401
-
- raise SkipTest("Requires not having brotli support")
+ import brotli # noqa: F401,PLC0415
+ pytest.skip("Requires not having brotli support")
except ImportError:
pass
response = self._getresponse("br")
@@ -169,7 +173,7 @@ class TestHttpCompression:
(
"HttpCompressionMiddleware cannot decode the response for"
" http://scrapytest.org/ from unsupported encoding(s) 'br'."
- " You need to install brotli or brotlicffi to decode 'br'."
+ " You need to install brotli >= 1.2.0 to decode 'br'."
),
),
)
@@ -503,24 +507,19 @@ class TestHttpCompression:
self.assertStatsEqual("httpcompression/response_bytes", None)
def _test_compression_bomb_setting(self, compression_id):
- settings = {"DOWNLOAD_MAXSIZE": 10_000_000}
+ settings = {"DOWNLOAD_MAXSIZE": 1_000_000}
crawler = get_crawler(Spider, settings_dict=settings)
spider = crawler._create_spider("scrapytest.org")
mw = HttpCompressionMiddleware.from_crawler(crawler)
mw.open_spider(spider)
- response = self._getresponse(f"bomb-{compression_id}")
- with pytest.raises(IgnoreRequest):
+ response = self._getresponse(f"bomb-{compression_id}") # 11_511_612 B
+ with pytest.raises(IgnoreRequest) as exc_info:
mw.process_response(response.request, response, spider)
+ assert exc_info.value.__cause__.decompressed_size < 1_100_000
def test_compression_bomb_setting_br(self):
- try:
- try:
- import brotli # noqa: F401
- except ImportError:
- import brotlicffi # noqa: F401
- except ImportError:
- raise SkipTest("no brotli")
+ _skip_if_no_br()
self._test_compression_bomb_setting("br")
def test_compression_bomb_setting_deflate(self):
@@ -538,7 +537,7 @@ class TestHttpCompression:
def _test_compression_bomb_spider_attr(self, compression_id):
class DownloadMaxSizeSpider(Spider):
- download_maxsize = 10_000_000
+ download_maxsize = 1_000_000
crawler = get_crawler(DownloadMaxSizeSpider)
spider = crawler._create_spider("scrapytest.org")
@@ -546,17 +545,12 @@ class TestHttpCompression:
mw.open_spider(spider)
response = self._getresponse(f"bomb-{compression_id}")
- with pytest.raises(IgnoreRequest):
+ with pytest.raises(IgnoreRequest) as exc_info:
mw.process_response(response.request, response, spider)
+ assert exc_info.value.__cause__.decompressed_size < 1_100_000
def test_compression_bomb_spider_attr_br(self):
- try:
- try:
- import brotli # noqa: F401
- except ImportError:
- import brotlicffi # noqa: F401
- except ImportError:
- raise SkipTest("no brotli")
+ _skip_if_no_br()
self._test_compression_bomb_spider_attr("br")
def test_compression_bomb_spider_attr_deflate(self):
@@ -579,18 +573,13 @@ class TestHttpCompression:
mw.open_spider(spider)
response = self._getresponse(f"bomb-{compression_id}")
- response.meta["download_maxsize"] = 10_000_000
- with pytest.raises(IgnoreRequest):
+ response.meta["download_maxsize"] = 1_000_000
+ with pytest.raises(IgnoreRequest) as exc_info:
mw.process_response(response.request, response, spider)
+ assert exc_info.value.__cause__.decompressed_size < 1_100_000
def test_compression_bomb_request_meta_br(self):
- try:
- try:
- import brotli # noqa: F401
- except ImportError:
- import brotlicffi # noqa: F401
- except ImportError:
- raise SkipTest("no brotli")
+ _skip_if_no_br()
self._test_compression_bomb_request_meta("br")
def test_compression_bomb_request_meta_deflate(self):
@@ -633,13 +622,7 @@ class TestHttpCompression:
)
def test_download_warnsize_setting_br(self):
- try:
- try:
- import brotli # noqa: F401
- except ImportError:
- import brotlicffi # noqa: F401
- except ImportError:
- raise SkipTest("no brotli")
+ _skip_if_no_br()
self._test_download_warnsize_setting("br")
def test_download_warnsize_setting_deflate(self):
@@ -684,13 +667,7 @@ class TestHttpCompression:
)
def test_download_warnsize_spider_attr_br(self):
- try:
- try:
- import brotli # noqa: F401
- except ImportError:
- import brotlicffi # noqa: F401
- except ImportError:
- raise SkipTest("no brotli")
+ _skip_if_no_br()
self._test_download_warnsize_spider_attr("br")
def test_download_warnsize_spider_attr_deflate(self):
@@ -733,13 +710,7 @@ class TestHttpCompression:
)
def test_download_warnsize_request_meta_br(self):
- try:
- try:
- import brotli # noqa: F401
- except ImportError:
- import brotlicffi # noqa: F401
- except ImportError:
- raise SkipTest("no brotli")
+ _skip_if_no_br()
self._test_download_warnsize_request_meta("br")
def test_download_warnsize_request_meta_deflate(self):
@@ -754,3 +725,34 @@ class TestHttpCompression:
except ImportError:
raise SkipTest("no zstd support (zstandard)")
self._test_download_warnsize_request_meta("zstd")
+
+ def _get_truncated_response(self, compression_id):
+ crawler = get_crawler(Spider)
+ spider = crawler._create_spider("scrapytest.org")
+ mw = HttpCompressionMiddleware.from_crawler(crawler)
+ mw.open_spider(spider)
+ response = self._getresponse(compression_id)
+ truncated_body = response.body[: len(response.body) // 2]
+ response = response.replace(body=truncated_body)
+ return mw.process_response(response.request, response, spider)
+
+ def test_process_truncated_response_br(self):
+ _skip_if_no_br()
+ resp = self._get_truncated_response("br")
+ assert resp.body.startswith(b"<!DOCTYPE")
+
+ def test_process_truncated_response_zlibdeflate(self):
+ resp = self._get_truncated_response("zlibdeflate")
+ assert resp.body.startswith(b"<!DOCTYPE")
+
+ def test_process_truncated_response_gzip(self):
+ resp = self._get_truncated_response("gzip")
+ assert resp.body.startswith(b"<!DOCTYPE")
+
+ def test_process_truncated_response_zstd(self):
+ _skip_if_no_zstd()
+ for check_key in FORMAT:
+ if not check_key.startswith("zstd-"):
+ continue
+ resp = self._get_truncated_response(check_key)
+ assert len(resp.body) == 0
Index: scrapy-2.13.3/tests/test_proxy_connect.py
===================================================================
--- scrapy-2.13.3.orig/tests/test_proxy_connect.py
+++ scrapy-2.13.3/tests/test_proxy_connect.py
@@ -62,6 +62,7 @@ def _wrong_credentials(proxy_url):
return urlunsplit(bad_auth_proxy)
+@pytest.mark.requires_mitmproxy
class TestProxyConnect(TestCase):
@classmethod
def setUpClass(cls):
@@ -73,13 +74,7 @@ class TestProxyConnect(TestCase):
cls.mockserver.__exit__(None, None, None)
def setUp(self):
- try:
- import mitmproxy # noqa: F401
- except ImportError:
- pytest.skip("mitmproxy is not installed")
-
self._oldenv = os.environ.copy()
-
self._proxy = MitmProxy()
proxy_url = self._proxy.start()
os.environ["https_proxy"] = proxy_url
Index: scrapy-2.13.3/tox.ini
===================================================================
--- scrapy-2.13.3.orig/tox.ini
+++ scrapy-2.13.3/tox.ini
@@ -112,9 +112,6 @@ deps =
w3lib==1.17.0
zope.interface==5.1.0
{[test-requirements]deps}
-
- # mitmproxy 8.0.0 requires upgrading some of the pinned dependencies
- # above, hence we do not install it in pinned environments at the moment
setenv =
_SCRAPY_PINNED=true
install_command =
@@ -141,8 +138,7 @@ deps =
Twisted[http2]
boto3
bpython # optional for shell wrapper tests
- brotli; implementation_name != "pypy" # optional for HTTP compress downloader middleware tests
- brotlicffi; implementation_name == "pypy" # optional for HTTP compress downloader middleware tests
+ brotli >= 1.2.0 # optional for HTTP compress downloader middleware tests
google-cloud-storage
ipython
robotexclusionrulesparser
@@ -156,9 +152,7 @@ deps =
Pillow==8.0.0
boto3==1.20.0
bpython==0.7.1
- brotli==0.5.2; implementation_name != "pypy"
- brotlicffi==0.8.0; implementation_name == "pypy"
- brotlipy
+ brotli==1.2.0
google-cloud-storage==1.29.0
ipython==2.0.0
robotexclusionrulesparser==1.6.2
@@ -258,7 +252,7 @@ deps =
{[testenv]deps}
botocore>=1.4.87
commands =
- pytest {posargs:--cov-config=pyproject.toml --cov=scrapy --cov-report=xml --cov-report= tests --junitxml=botocore.junit.xml -o junit_family=legacy -m requires_botocore}
+ pytest {posargs:--cov-config=pyproject.toml --cov=scrapy --cov-report=xml --cov-report= tests --junitxml=botocore.junit.xml -o junit_family=legacy} -m requires_botocore
[testenv:botocore-pinned]
basepython = {[pinned]basepython}
@@ -269,4 +263,17 @@ install_command = {[pinned]install_comma
setenv =
{[pinned]setenv}
commands =
- pytest {posargs:--cov-config=pyproject.toml --cov=scrapy --cov-report=xml --cov-report= tests --junitxml=botocore-pinned.junit.xml -o junit_family=legacy -m requires_botocore}
+ pytest {posargs:--cov-config=pyproject.toml --cov=scrapy --cov-report=xml --cov-report= tests --junitxml=botocore-pinned.junit.xml -o junit_family=legacy} -m requires_botocore
+
+
+# Run proxy tests that use mitmproxy in a separate env to avoid installing
+# numerous mitmproxy deps in other envs (even in extra-deps), as they can
+# conflict with other deps we want, or don't want, to have installed there.
+
+[testenv:mitmproxy]
+deps =
+ {[testenv]deps}
+ # mitmproxy does not support PyPy
+ mitmproxy; implementation_name != "pypy"
+commands =
+ pytest {posargs:--cov-config=pyproject.toml --cov=scrapy --cov-report=xml --cov-report= tests --junitxml=botocore.junit.xml -o junit_family=legacy} -m requires_mitmproxy
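
To summarise the pattern the patch above applies to the gzip, deflate, brotli and zstd handlers, here is a standalone, hedged sketch (not Scrapy code): decompress in bounded chunks and abort as soon as a running total exceeds a cap, so a small compressed body cannot expand without limit in memory. The function name and limit handling are illustrative; the real patch also falls back to raw deflate (wbits=-15) for non-conforming servers.

    import zlib
    from io import BytesIO

    _CHUNK_SIZE = 65536  # 64 KiB, mirroring the patch

    def bounded_inflate(data: bytes, max_size: int) -> bytes:
        decompressor = zlib.decompressobj()
        out = BytesIO()
        total = 0
        chunk = decompressor.decompress(data, _CHUNK_SIZE)
        while chunk:
            total += len(chunk)
            if max_size and total > max_size:
                raise ValueError(f"decompressed {total} B exceeds the {max_size} B limit")
            out.write(chunk)
            # Feed back only the input zlib has not consumed yet.
            chunk = decompressor.decompress(decompressor.unconsumed_tail, _CHUNK_SIZE)
        tail = decompressor.flush()
        if tail:
            total += len(tail)
            if max_size and total > max_size:
                raise ValueError(f"decompressed {total} B exceeds the {max_size} B limit")
            out.write(tail)
        return out.getvalue()
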

CVE-2025-6176-testfile-bomb-br-64GiB.bin (deleted file)

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:733a039c7423e52b69bf2810b5332093d4e42a848460359c07b02ecff8f73ebe
size 1176726

_multibuild (new file, 3 lines)

@@ -0,0 +1,3 @@
<multibuild>
<package>test</package>
</multibuild>

no-dark-mode.patch (new file, 18 lines)

@@ -0,0 +1,18 @@
Index: scrapy-2.13.3/docs/conf.py
===================================================================
--- scrapy-2.13.3.orig/docs/conf.py
+++ scrapy-2.13.3/docs/conf.py
@@ -34,7 +34,7 @@ extensions = [
"sphinx.ext.coverage",
"sphinx.ext.intersphinx",
"sphinx.ext.viewcode",
- "sphinx_rtd_dark_mode",
+ "sphinx_rtd_theme",
]
templates_path = ["_templates"]
@@ -158,4 +158,3 @@ intersphinx_mapping = {
intersphinx_disabled_reftypes: Sequence[str] = []
# -- Other options ------------------------------------------------------------
-default_dark_mode = False

python-Scrapy.changes

@@ -1,3 +1,62 @@
-------------------------------------------------------------------
Mon Nov 17 10:58:13 UTC 2025 - Daniel Garcia <daniel.garcia@suse.com>
- Update CVE-2025-6176.patch to reflect the latest changes upstream to
the patch.
- Remove the CVE-2025-6176-testfile-bomb-br-64GiB.bin source, it's not
needed anymore.
gh#scrapy/scrapy#7134, bsc#1252945, CVE-2025-6176)
-------------------------------------------------------------------
Wed Nov 12 12:28:41 UTC 2025 - Daniel Garcia <daniel.garcia@suse.com>
- Use libalternatives
- Use multibuild to run tests in a subpackage
- add upstream patch CVE-2025-6176.patch to mitigate brotli and
deflate decompression bombs DoS.
This patch adds a new bin test file that was added as a new source
as CVE-2025-6176-testfile-bomb-br-64GiB.bin
gh#scrapy/scrapy#7134, bsc#1252945, CVE-2025-6176)
-------------------------------------------------------------------
Thu Jul 31 05:18:40 UTC 2025 - Steve Kowalik <steven.kowalik@suse.com>
- Update to 2.13.3:
* Changed the values for DOWNLOAD_DELAY (from 0 to 1) and
CONCURRENT_REQUESTS_PER_DOMAIN (from 8 to 1) in the default project
template.
* Fixed several bugs in the engine initialization and exception handling
logic.
* Allowed running tests with Twisted 25.5.0+ again and fixed test failures
with lxml 6.0.0.
* Give callback requests precedence over start requests when priority
values are the same.
* The asyncio reactor is now enabled by default
* Replaced start_requests() (sync) with start() (async) and changed how it
is iterated.
* Added the allow_offsite request meta key
* Spider middlewares that don't support asynchronous spider output are
deprecated
* Added a base class for universal spider middlewares
- Add patch remove-hoverxref.patch:
* Do not use deprecated sphinx-hoverxref extension.
- Add patch no-dark-mode.patch:
* Do not use unavailable sphinx-rtd-dark-mode extension.
-------------------------------------------------------------------
Thu Mar 27 05:45:59 UTC 2025 - Steve Kowalik <steven.kowalik@suse.com>
- Normalize metadata directory name.
-------------------------------------------------------------------
Tue Dec 3 08:24:29 UTC 2024 - Steve Kowalik <steven.kowalik@suse.com>
- Update to 2.12.0:
* Dropped support for Python 3.8, added support for Python 3.13
* start_requests can now yield items
* Added scrapy.http.JsonResponse
* Added the CLOSESPIDER_PAGECOUNT_NO_ITEM setting
-------------------------------------------------------------------
Thu Jul 11 10:38:36 UTC 2024 - Dirk Müller <dmueller@suse.com>
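
The CVE-2025-6176 entries in this changelog concern brotli and deflate decompression bombs; the underlying mitigation idea is to cap how far a compressed response body may expand rather than inflating it with a single unbounded call. A minimal, assumption-level sketch of such a cap for the deflate case, using only the standard-library zlib API and a made-up limit (this is not the code in CVE-2025-6176.patch):

import zlib

# Hypothetical cap for illustration; the real limit is not taken from the patch.
MAX_DECOMPRESSED_BYTES = 10 * 1024 * 1024  # 10 MiB


def inflate_bounded(data: bytes, limit: int = MAX_DECOMPRESSED_BYTES) -> bytes:
    # Use zlib.decompressobj(-zlib.MAX_WBITS) instead for raw deflate streams.
    decompressor = zlib.decompressobj()
    # max_length stops decompression once `limit` bytes have been produced;
    # compressed input that could not be expanded yet is kept in unconsumed_tail.
    out = decompressor.decompress(data, limit)
    if decompressor.unconsumed_tail:
        raise ValueError("decompression bomb: output exceeds %d bytes" % limit)
    return out

On the brotli side an equivalent output bound is needed as well, which is presumably what the Brotli >= 1.2.0 bumps visible in the tox.ini and spec diffs are about.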

python-Scrapy.spec

@@ -1,7 +1,7 @@
#
# spec file for package python-Scrapy
#
# Copyright (c) 2024 SUSE LLC
# Copyright (c) 2025 SUSE LLC and contributors
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
@@ -16,22 +16,47 @@
#
%global flavor @BUILD_FLAVOR@%{nil}
%if "%{flavor}" == "test"
%define psuffix -test
%bcond_without test
%endif
%if "%{flavor}" == ""
%define psuffix %{nil}
%bcond_with test
%endif
%if 0%{?suse_version} > 1500
%bcond_without libalternatives
%else
%bcond_with libalternatives
%endif
%{?sle15_python_module_pythons}
Name: python-Scrapy
Version: 2.11.2
Name: python-Scrapy%{?psuffix}
Version: 2.13.3
Release: 0
Summary: A high-level Python Screen Scraping framework
License: BSD-3-Clause
Group: Development/Languages/Python
URL: https://scrapy.org
Source: https://files.pythonhosted.org/packages/source/s/scrapy/scrapy-%{version}.tar.gz
BuildRequires: %{python_module Brotli}
# PATCH-FIX-UPSTREAM gh#scrapy/scrapy#6922
Patch0: remove-hoverxref.patch
# PATCH-FIX-OPENSUSE No sphinx-rtd-dark-mode
Patch1: no-dark-mode.patch
# PATCH-FIX-UPSTREAM CVE-2025-6176.patch gh#scrapy/scrapy#7134
Patch2: CVE-2025-6176.patch
BuildRequires: %{python_module base >= 3.9}
BuildRequires: %{python_module hatchling}
BuildRequires: %{python_module pip}
BuildRequires: %{python_module wheel}
%if %{with test}
# Test requirements:
BuildRequires: %{python_module Scrapy = %{version}}
BuildRequires: %{python_module Brotli >= 1.2.0}
BuildRequires: %{python_module Pillow}
BuildRequires: %{python_module Protego}
BuildRequires: %{python_module PyDispatcher >= 2.0.5}
BuildRequires: %{python_module Twisted >= 18.9.0}
BuildRequires: %{python_module attrs}
BuildRequires: %{python_module base >= 3.8}
BuildRequires: %{python_module botocore >= 1.4.87}
BuildRequires: %{python_module cryptography >= 36.0.0}
BuildRequires: %{python_module cssselect >= 0.9.1}
@@ -42,24 +67,24 @@ BuildRequires: %{python_module itemloaders >= 1.0.1}
BuildRequires: %{python_module lxml >= 4.4.1}
BuildRequires: %{python_module parsel >= 1.5.0}
BuildRequires: %{python_module pexpect >= 4.8.1}
BuildRequires: %{python_module pip}
BuildRequires: %{python_module pyOpenSSL >= 21.0.0}
BuildRequires: %{python_module pyftpdlib >= 1.5.8}
BuildRequires: %{python_module pytest-xdist}
BuildRequires: %{python_module pytest}
BuildRequires: %{python_module queuelib >= 1.4.2}
BuildRequires: %{python_module service_identity >= 18.1.0}
BuildRequires: %{python_module setuptools}
BuildRequires: %{python_module sybil}
BuildRequires: %{python_module testfixtures}
BuildRequires: %{python_module tldextract}
BuildRequires: %{python_module uvloop}
BuildRequires: %{python_module w3lib >= 1.17.0}
BuildRequires: %{python_module wheel}
BuildRequires: %{python_module zope.interface >= 5.1.0}
%endif
BuildRequires: fdupes
BuildRequires: python-rpm-macros
BuildRequires: python3-Sphinx
BuildRequires: python3-sphinx-notfound-page
BuildRequires: python3-sphinx_rtd_theme
Requires: python-Protego >= 0.1.15
Requires: python-PyDispatcher >= 2.0.5
Requires: python-Twisted >= 18.9.0
@@ -74,13 +99,17 @@ Requires: python-parsel >= 1.5.0
Requires: python-pyOpenSSL >= 21.0.0
Requires: python-queuelib >= 1.4.2
Requires: python-service_identity >= 18.1.0
Requires: python-setuptools
Requires: python-tldextract
Requires: python-w3lib >= 1.17.2
Requires: python-zope.interface >= 5.1.0
BuildArch: noarch
%if %{with libalternatives}
BuildRequires: alts
Requires: alts
%else
Requires(post): update-alternatives
Requires(postun): update-alternatives
BuildArch: noarch
%endif
%python_subpackages
%description
@@ -90,7 +119,6 @@ retrieval to monitoring or testing web sites.
%package -n %{name}-doc
Summary: Documentation for %{name}
Group: Documentation/HTML
%description -n %{name}-doc
Provides documentation for %{name}.
@@ -100,6 +128,7 @@ Provides documentation for %{name}.
sed -i -e 's:= python:= python3:g' docs/Makefile
%if %{without test}
%build
%pyproject_wheel
pushd docs
@@ -110,7 +139,9 @@ popd
%pyproject_install
%python_clone -a %{buildroot}%{_bindir}/scrapy
%python_expand %fdupes %{buildroot}%{$python_sitelib}
%endif
%if %{with test}
%check
# no color in obs chroot console
skiplist="test_pformat"
@@ -118,10 +149,18 @@ skiplist="test_pformat"
skiplist="$skiplist or CheckCommandTest or test_file_path"
# Flaky test gh#scrapy/scrapy#5703
skiplist="$skiplist or test_start_requests_laziness"
# Fails on 32 bit arches
skiplist="$skiplist or test_queue_push_pop_priorities"
%{pytest -x \
-k "not (${skiplist})" \
-W ignore::DeprecationWarning \
tests}
%endif
%if %{without test}
%pre
# If libalternatives is used: Removing old update-alternatives entries.
%python_libalternatives_reset_alternative scrapy
%post
%python_install_alternative scrapy
@@ -133,10 +172,11 @@ skiplist="$skiplist or test_start_requests_laziness"
%license LICENSE
%doc AUTHORS README.rst
%{python_sitelib}/scrapy
%{python_sitelib}/Scrapy-%{version}.dist-info
%{python_sitelib}/[Ss]crapy-%{version}.dist-info
%python_alternative %{_bindir}/scrapy
%files -n %{name}-doc
%doc docs/build/html
%endif
%changelog

remove-hoverxref.patch

@@ -0,0 +1,56 @@
From 549730c23592479f200f3c1f941c59f68c510ff5 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Adri=C3=A1n=20Chaves?= <adrian@chaves.io>
Date: Sat, 28 Jun 2025 12:32:55 +0200
Subject: [PATCH] Remove the deprecated sphinx-hoverxref
---
docs/conf.py | 20 +-------------------
docs/requirements.txt | 1 -
2 files changed, 1 insertion(+), 20 deletions(-)
diff --git a/docs/conf.py b/docs/conf.py
index 493a6297624..0345ec69543 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -26,7 +26,6 @@
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = [
- "hoverxref.extension",
"notfound.extension",
"scrapydocs",
"sphinx.ext.autodoc",
@@ -157,22 +156,5 @@
}
intersphinx_disabled_reftypes: Sequence[str] = []
-
-# -- Options for sphinx-hoverxref extension ----------------------------------
-# https://sphinx-hoverxref.readthedocs.io/en/latest/configuration.html
-
-hoverxref_auto_ref = True
-hoverxref_role_types = {
- "class": "tooltip",
- "command": "tooltip",
- "confval": "tooltip",
- "hoverxref": "tooltip",
- "mod": "tooltip",
- "ref": "tooltip",
- "reqmeta": "tooltip",
- "setting": "tooltip",
- "signal": "tooltip",
-}
-hoverxref_roles = ["command", "reqmeta", "setting", "signal"]
-
+# -- Other options ------------------------------------------------------------
default_dark_mode = False
diff --git a/docs/requirements.txt b/docs/requirements.txt
index 103fb08d667..4b382b11eb9 100644
--- a/docs/requirements.txt
+++ b/docs/requirements.txt
@@ -1,5 +1,4 @@
sphinx==8.1.3
-sphinx-hoverxref==1.4.2
sphinx-notfound-page==1.0.4
sphinx-rtd-theme==3.0.2
sphinx-rtd-dark-mode==1.3.0


@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dfbd565384fc3fffeba121f5a3a2d0899ac1f756d41432ca0879933fbfb3401d
size 1187710

scrapy-2.13.3.tar.gz

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf17588c10e46a9d70c49a05380b749e3c7fba58204a367a5747ce6da2bd204d
size 1220051