Accepting request 1200888 from devel:languages:python:Factory

- Add doc-py38-to-py36.patch to make building the documentation
  compatible with Python 3.6, which is what runs Sphinx on SLE.
- Update to 3.12.6:
  - Tests
    - gh-101525: Skip test_gdb if the binary is relocated by
      BOLT. Patch by Donghee Na.
  - Security
    - gh-123678: Upgrade libexpat to 2.6.3
    - gh-121285: Remove backtracking from tarfile header parsing
      for hdrcharset, PAX, and GNU sparse headers (bsc#1230227,
      CVE-2024-6232).
  - Library
    - gh-123270: Applied a more surgical fix for malformed
      payloads in zipfile.Path causing infinite loops (gh-122905)
      without breaking contents using legitimate characters
      (bsc#1229704, CVE-2024-8088).
    - gh-123213: xml.etree.ElementTree.Element.extend() and
      Element assignment no longer hide the internal exception if
      an erroneous generator is passed. Patch by Bar Harel.
    - gh-85110: Preserve relative path in URL without netloc in
      urllib.parse.urlunsplit() and urllib.parse.urlunparse().
    - gh-123067: Fix quadratic complexity in parsing "-quoted
      cookie values with backslashes by http.cookies
      (bsc#1229596, CVE-2024-7592)
    - gh-122903: zipfile.Path.glob now correctly matches
      directories instead of silently omitting them.
    - gh-122905: zipfile.Path objects now sanitize names from the
      zipfile.
    - gh-122695: Fixed double-free when using gc.get_referents()
      with a freed asyncio.Future iterator.
    - gh-116263: logging.handlers.RotatingFileHandler no longer
      rolls over empty log files.
    - gh-118814: Fix the typing.TypeVar constructor when name is
      passed by keyword.
    - gh-122478: Remove internal frames from tracebacks
      shown in code.InteractiveInterpreter with non-default
      sys.excepthook(). Save correct tracebacks in
      sys.last_traceback and update __traceback__ attribute of
      sys.last_value and sys.last_exc.
    - gh-113785: csv now correctly parses numeric fields (when
      used with csv.QUOTE_NONNUMERIC) which start with an escape
      character.
    - gh-112182: asyncio.futures.Future.set_exception() now
      transforms StopIteration into RuntimeError instead of
      hanging or other misbehavior. Patch contributed by Jamie
      Phan.
    - gh-108172: webbrowser honors OS preferred browser on Linux
      when its desktop entry name contains the text of a known
      browser name.
    - gh-102988: email.utils.getaddresses() and
      email.utils.parseaddr() now return ('', '') 2-tuples
      in more situations where invalid email addresses are
      encountered instead of potentially inaccurate values. Add
      optional strict parameter to these two functions: use
      strict=False to get the old behavior and accept malformed
      inputs. getattr(email.utils, 'supports_strict_parsing',
      False) can be used to check if the strict parameter is
      available (see the sketch after this changelog). Patch by
      Thomas Dwyer and Victor Stinner to improve the
      CVE-2023-27043 fix.
    - gh-99437: runpy.run_path() now decodes path-like objects,
      making sure __file__ and sys.argv[0] of the module being
      run are always strings.
  - IDLE
    - gh-120083: Add explicit black IDLE Hovertip foreground
      color needed for recent macOS. Fixes Sonoma showing
      unreadable white on pale yellow. Patch by John Riggles.
  - Core and Builtins
    - gh-123321: Prevent Parser/myreadline race condition from
      segfaulting on multi-threaded use. Patch by Bar Harel and
      Amit Wienner.
    - gh-122982: Extend the deprecation period for bool inversion
      (~) by two years.
    - gh-123229: Fix valgrind warning by initializing the
      f-string buffers to 0 in the tokenizer. Patch by Pablo
      Galindo
    - gh-123142: Fix too-wide source location in exception
      tracebacks coming from broken iterables in comprehensions.
    - gh-123048: Fix a bug where pattern matching code could emit
      a JUMP_FORWARD with no source location.
    - gh-123083: Fix a potential use-after-free in
      STORE_ATTR_WITH_HINT.
    - gh-122527: Fix a crash that occurred when a
      PyStructSequence was deallocated after its type’s
      dictionary was cleared by the GC. The type’s tp_basicsize
      now accounts for non-sequence fields that aren’t included
      in the Py_SIZE of the sequence.
    - gh-93691: Fix source locations of instructions generated
      for with statements.
  - Build
    - gh-123297: Propagate the value of LDFLAGS to LDCXXSHARED in
      sysconfig. Patch by Pablo Galindo
- Remove upstreamed patches:
  - CVE-2023-27043-email-parsing-errors.patch
  - CVE-2024-8088-inf-loop-zipfile_Path.patch
  - CVE-2023-6597-TempDir-cleaning-symlink.patch
  - gh120226-fix-sendfile-test-kernel-610.patch
- Add gh120226-fix-sendfile-test-kernel-610.patch to avoid
  failing test_sendfile_close_peer_in_the_middle_of_receiving
  tests on Linux >= 6.10 (GH-120227).
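
The gh-102988 entry above changes the default parsing behaviour of
email.utils. The following is a minimal sketch of how downstream code
might feature-detect the new strict parameter through the
supports_strict_parsing attribute before relying on it; the helper name
parse_addresses is made up for illustration and is not part of the
change.

    import email.utils

    def parse_addresses(field_values):
        # Older 3.12.x builds lack the strict parameter entirely, so
        # probe for it as the changelog entry suggests.
        if getattr(email.utils, 'supports_strict_parsing', False):
            # Strict parsing rejects malformed addresses outright and
            # returns ('', '') placeholders instead of guessing.
            return email.utils.getaddresses(field_values, strict=True)
        # Fall back to the historical, lenient parser.
        return email.utils.getaddresses(field_values)

    print(parse_addresses(['"Jane Doe" <jane@example.net>']))
    # [('Jane Doe', 'jane@example.net')]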

OBS-URL: https://build.opensuse.org/request/show/1200888
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/python312?expand=0&rev=20
Ana Guerrero 2024-09-26 16:52:41 +00:00 committed by Git OBS Bridge
commit 5d2f502703
13 changed files with 554 additions and 864 deletions

View File

@ -1,474 +0,0 @@
From 4a153a1d3b18803a684cd1bcc2cdf3ede3dbae19 Mon Sep 17 00:00:00 2001
From: Victor Stinner <vstinner@python.org>
Date: Fri, 15 Dec 2023 16:10:40 +0100
Subject: [PATCH] [CVE-2023-27043] gh-102988: Reject malformed addresses in
email.parseaddr() (#111116)
Detect email address parsing errors and return empty tuple to
indicate the parsing error (old API). Add an optional 'strict'
parameter to getaddresses() and parseaddr() functions. Patch by
Thomas Dwyer.
Co-Authored-By: Thomas Dwyer <github@tomd.tel>
---
Doc/library/email.utils.rst | 19 -
Lib/email/utils.py | 151 +++++++-
Lib/test/test_email/test_email.py | 187 +++++++++-
Misc/NEWS.d/next/Library/2023-10-20-15-28-08.gh-issue-102988.dStNO7.rst | 8
4 files changed, 344 insertions(+), 21 deletions(-)
create mode 100644 Misc/NEWS.d/next/Library/2023-10-20-15-28-08.gh-issue-102988.dStNO7.rst
--- a/Doc/library/email.utils.rst
+++ b/Doc/library/email.utils.rst
@@ -58,13 +58,18 @@ of the new API.
begins with angle brackets, they are stripped off.
-.. function:: parseaddr(address)
+.. function:: parseaddr(address, *, strict=True)
Parse address -- which should be the value of some address-containing field such
as :mailheader:`To` or :mailheader:`Cc` -- into its constituent *realname* and
*email address* parts. Returns a tuple of that information, unless the parse
fails, in which case a 2-tuple of ``('', '')`` is returned.
+ If *strict* is true, use a strict parser which rejects malformed inputs.
+
+ .. versionchanged:: 3.13
+ Add *strict* optional parameter and reject malformed inputs by default.
+
.. function:: formataddr(pair, charset='utf-8')
@@ -82,12 +87,15 @@ of the new API.
Added the *charset* option.
-.. function:: getaddresses(fieldvalues)
+.. function:: getaddresses(fieldvalues, *, strict=True)
This method returns a list of 2-tuples of the form returned by ``parseaddr()``.
*fieldvalues* is a sequence of header field values as might be returned by
- :meth:`Message.get_all <email.message.Message.get_all>`. Here's a simple
- example that gets all the recipients of a message::
+ :meth:`Message.get_all <email.message.Message.get_all>`.
+
+ If *strict* is true, use a strict parser which rejects malformed inputs.
+
+ Here's a simple example that gets all the recipients of a message::
from email.utils import getaddresses
@@ -97,6 +105,9 @@ of the new API.
resent_ccs = msg.get_all('resent-cc', [])
all_recipients = getaddresses(tos + ccs + resent_tos + resent_ccs)
+ .. versionchanged:: 3.13
+ Add *strict* optional parameter and reject malformed inputs by default.
+
.. function:: parsedate(date)
--- a/Lib/email/utils.py
+++ b/Lib/email/utils.py
@@ -48,6 +48,7 @@ TICK = "'"
specialsre = re.compile(r'[][\\()<>@,:;".]')
escapesre = re.compile(r'[\\"]')
+
def _has_surrogates(s):
"""Return True if s may contain surrogate-escaped binary data."""
# This check is based on the fact that unless there are surrogates, utf8
@@ -106,12 +107,127 @@ def formataddr(pair, charset='utf-8'):
return address
+def _iter_escaped_chars(addr):
+ pos = 0
+ escape = False
+ for pos, ch in enumerate(addr):
+ if escape:
+ yield (pos, '\\' + ch)
+ escape = False
+ elif ch == '\\':
+ escape = True
+ else:
+ yield (pos, ch)
+ if escape:
+ yield (pos, '\\')
+
+
+def _strip_quoted_realnames(addr):
+ """Strip real names between quotes."""
+ if '"' not in addr:
+ # Fast path
+ return addr
+
+ start = 0
+ open_pos = None
+ result = []
+ for pos, ch in _iter_escaped_chars(addr):
+ if ch == '"':
+ if open_pos is None:
+ open_pos = pos
+ else:
+ if start != open_pos:
+ result.append(addr[start:open_pos])
+ start = pos + 1
+ open_pos = None
-def getaddresses(fieldvalues):
- """Return a list of (REALNAME, EMAIL) for each fieldvalue."""
- all = COMMASPACE.join(str(v) for v in fieldvalues)
- a = _AddressList(all)
- return a.addresslist
+ if start < len(addr):
+ result.append(addr[start:])
+
+ return ''.join(result)
+
+
+supports_strict_parsing = True
+
+def getaddresses(fieldvalues, *, strict=True):
+ """Return a list of (REALNAME, EMAIL) or ('','') for each fieldvalue.
+
+ When parsing fails for a fieldvalue, a 2-tuple of ('', '') is returned in
+ its place.
+
+ If strict is true, use a strict parser which rejects malformed inputs.
+ """
+
+ # If strict is true, if the resulting list of parsed addresses is greater
+ # than the number of fieldvalues in the input list, a parsing error has
+ # occurred and consequently a list containing a single empty 2-tuple [('',
+ # '')] is returned in its place. This is done to avoid invalid output.
+ #
+ # Malformed input: getaddresses(['alice@example.com <bob@example.com>'])
+ # Invalid output: [('', 'alice@example.com'), ('', 'bob@example.com')]
+ # Safe output: [('', '')]
+
+ if not strict:
+ all = COMMASPACE.join(str(v) for v in fieldvalues)
+ a = _AddressList(all)
+ return a.addresslist
+
+ fieldvalues = [str(v) for v in fieldvalues]
+ fieldvalues = _pre_parse_validation(fieldvalues)
+ addr = COMMASPACE.join(fieldvalues)
+ a = _AddressList(addr)
+ result = _post_parse_validation(a.addresslist)
+
+ # Treat output as invalid if the number of addresses is not equal to the
+ # expected number of addresses.
+ n = 0
+ for v in fieldvalues:
+ # When a comma is used in the Real Name part it is not a deliminator.
+ # So strip those out before counting the commas.
+ v = _strip_quoted_realnames(v)
+ # Expected number of addresses: 1 + number of commas
+ n += 1 + v.count(',')
+ if len(result) != n:
+ return [('', '')]
+
+ return result
+
+
+def _check_parenthesis(addr):
+ # Ignore parenthesis in quoted real names.
+ addr = _strip_quoted_realnames(addr)
+
+ opens = 0
+ for pos, ch in _iter_escaped_chars(addr):
+ if ch == '(':
+ opens += 1
+ elif ch == ')':
+ opens -= 1
+ if opens < 0:
+ return False
+ return (opens == 0)
+
+
+def _pre_parse_validation(email_header_fields):
+ accepted_values = []
+ for v in email_header_fields:
+ if not _check_parenthesis(v):
+ v = "('', '')"
+ accepted_values.append(v)
+
+ return accepted_values
+
+
+def _post_parse_validation(parsed_email_header_tuples):
+ accepted_values = []
+ # The parser would have parsed a correctly formatted domain-literal
+ # The existence of an [ after parsing indicates a parsing failure
+ for v in parsed_email_header_tuples:
+ if '[' in v[1]:
+ v = ('', '')
+ accepted_values.append(v)
+
+ return accepted_values
def _format_timetuple_and_zone(timetuple, zone):
@@ -205,16 +321,33 @@ def parsedate_to_datetime(data):
tzinfo=datetime.timezone(datetime.timedelta(seconds=tz)))
-def parseaddr(addr):
+def parseaddr(addr, *, strict=True):
"""
Parse addr into its constituent realname and email address parts.
Return a tuple of realname and email address, unless the parse fails, in
which case return a 2-tuple of ('', '').
+
+ If strict is True, use a strict parser which rejects malformed inputs.
"""
- addrs = _AddressList(addr).addresslist
- if not addrs:
- return '', ''
+ if not strict:
+ addrs = _AddressList(addr).addresslist
+ if not addrs:
+ return ('', '')
+ return addrs[0]
+
+ if isinstance(addr, list):
+ addr = addr[0]
+
+ if not isinstance(addr, str):
+ return ('', '')
+
+ addr = _pre_parse_validation([addr])[0]
+ addrs = _post_parse_validation(_AddressList(addr).addresslist)
+
+ if not addrs or len(addrs) > 1:
+ return ('', '')
+
return addrs[0]
--- a/Lib/test/test_email/test_email.py
+++ b/Lib/test/test_email/test_email.py
@@ -16,6 +16,7 @@ from unittest.mock import patch
import email
import email.policy
+import email.utils
from email.charset import Charset
from email.generator import Generator, DecodedGenerator, BytesGenerator
@@ -3352,15 +3353,137 @@ Foo
],
)
+ def test_parsing_errors(self):
+ """Test for parsing errors from CVE-2023-27043 and CVE-2019-16056"""
+ alice = 'alice@example.org'
+ bob = 'bob@example.com'
+ empty = ('', '')
+
+ # Test utils.getaddresses() and utils.parseaddr() on malformed email
+ # addresses: default behavior (strict=True) rejects malformed address,
+ # and strict=False which tolerates malformed address.
+ for invalid_separator, expected_non_strict in (
+ ('(', [(f'<{bob}>', alice)]),
+ (')', [('', alice), empty, ('', bob)]),
+ ('<', [('', alice), empty, ('', bob), empty]),
+ ('>', [('', alice), empty, ('', bob)]),
+ ('[', [('', f'{alice}[<{bob}>]')]),
+ (']', [('', alice), empty, ('', bob)]),
+ ('@', [empty, empty, ('', bob)]),
+ (';', [('', alice), empty, ('', bob)]),
+ (':', [('', alice), ('', bob)]),
+ ('.', [('', alice + '.'), ('', bob)]),
+ ('"', [('', alice), ('', f'<{bob}>')]),
+ ):
+ address = f'{alice}{invalid_separator}<{bob}>'
+ with self.subTest(address=address):
+ self.assertEqual(utils.getaddresses([address]),
+ [empty])
+ self.assertEqual(utils.getaddresses([address], strict=False),
+ expected_non_strict)
+
+ self.assertEqual(utils.parseaddr([address]),
+ empty)
+ self.assertEqual(utils.parseaddr([address], strict=False),
+ ('', address))
+
+ # Comma (',') is treated differently depending on strict parameter.
+ # Comma without quotes.
+ address = f'{alice},<{bob}>'
+ self.assertEqual(utils.getaddresses([address]),
+ [('', alice), ('', bob)])
+ self.assertEqual(utils.getaddresses([address], strict=False),
+ [('', alice), ('', bob)])
+ self.assertEqual(utils.parseaddr([address]),
+ empty)
+ self.assertEqual(utils.parseaddr([address], strict=False),
+ ('', address))
+
+ # Real name between quotes containing comma.
+ address = '"Alice, alice@example.org" <bob@example.com>'
+ expected_strict = ('Alice, alice@example.org', 'bob@example.com')
+ self.assertEqual(utils.getaddresses([address]), [expected_strict])
+ self.assertEqual(utils.getaddresses([address], strict=False), [expected_strict])
+ self.assertEqual(utils.parseaddr([address]), expected_strict)
+ self.assertEqual(utils.parseaddr([address], strict=False),
+ ('', address))
+
+ # Valid parenthesis in comments.
+ address = 'alice@example.org (Alice)'
+ expected_strict = ('Alice', 'alice@example.org')
+ self.assertEqual(utils.getaddresses([address]), [expected_strict])
+ self.assertEqual(utils.getaddresses([address], strict=False), [expected_strict])
+ self.assertEqual(utils.parseaddr([address]), expected_strict)
+ self.assertEqual(utils.parseaddr([address], strict=False),
+ ('', address))
+
+ # Invalid parenthesis in comments.
+ address = 'alice@example.org )Alice('
+ self.assertEqual(utils.getaddresses([address]), [empty])
+ self.assertEqual(utils.getaddresses([address], strict=False),
+ [('', 'alice@example.org'), ('', ''), ('', 'Alice')])
+ self.assertEqual(utils.parseaddr([address]), empty)
+ self.assertEqual(utils.parseaddr([address], strict=False),
+ ('', address))
+
+ # Two addresses with quotes separated by comma.
+ address = '"Jane Doe" <jane@example.net>, "John Doe" <john@example.net>'
+ self.assertEqual(utils.getaddresses([address]),
+ [('Jane Doe', 'jane@example.net'),
+ ('John Doe', 'john@example.net')])
+ self.assertEqual(utils.getaddresses([address], strict=False),
+ [('Jane Doe', 'jane@example.net'),
+ ('John Doe', 'john@example.net')])
+ self.assertEqual(utils.parseaddr([address]), empty)
+ self.assertEqual(utils.parseaddr([address], strict=False),
+ ('', address))
+
+ # Test email.utils.supports_strict_parsing attribute
+ self.assertEqual(email.utils.supports_strict_parsing, True)
+
def test_getaddresses_nasty(self):
- eq = self.assertEqual
- eq(utils.getaddresses(['foo: ;']), [('', '')])
- eq(utils.getaddresses(
- ['[]*-- =~$']),
- [('', ''), ('', ''), ('', '*--')])
- eq(utils.getaddresses(
- ['foo: ;', '"Jason R. Mastaler" <jason@dom.ain>']),
- [('', ''), ('Jason R. Mastaler', 'jason@dom.ain')])
+ for addresses, expected in (
+ (['"Sürname, Firstname" <to@example.com>'],
+ [('Sürname, Firstname', 'to@example.com')]),
+
+ (['foo: ;'],
+ [('', '')]),
+
+ (['foo: ;', '"Jason R. Mastaler" <jason@dom.ain>'],
+ [('', ''), ('Jason R. Mastaler', 'jason@dom.ain')]),
+
+ ([r'Pete(A nice \) chap) <pete(his account)@silly.test(his host)>'],
+ [('Pete (A nice ) chap his account his host)', 'pete@silly.test')]),
+
+ (['(Empty list)(start)Undisclosed recipients :(nobody(I know))'],
+ [('', '')]),
+
+ (['Mary <@machine.tld:mary@example.net>, , jdoe@test . example'],
+ [('Mary', 'mary@example.net'), ('', ''), ('', 'jdoe@test.example')]),
+
+ (['John Doe <jdoe@machine(comment). example>'],
+ [('John Doe (comment)', 'jdoe@machine.example')]),
+
+ (['"Mary Smith: Personal Account" <smith@home.example>'],
+ [('Mary Smith: Personal Account', 'smith@home.example')]),
+
+ (['Undisclosed recipients:;'],
+ [('', '')]),
+
+ ([r'<boss@nil.test>, "Giant; \"Big\" Box" <bob@example.net>'],
+ [('', 'boss@nil.test'), ('Giant; "Big" Box', 'bob@example.net')]),
+ ):
+ with self.subTest(addresses=addresses):
+ self.assertEqual(utils.getaddresses(addresses),
+ expected)
+ self.assertEqual(utils.getaddresses(addresses, strict=False),
+ expected)
+
+ addresses = ['[]*-- =~$']
+ self.assertEqual(utils.getaddresses(addresses),
+ [('', '')])
+ self.assertEqual(utils.getaddresses(addresses, strict=False),
+ [('', ''), ('', ''), ('', '*--')])
def test_getaddresses_embedded_comment(self):
"""Test proper handling of a nested comment"""
@@ -3551,6 +3674,54 @@ multipart/report
m = cls(*constructor, policy=email.policy.default)
self.assertIs(m.policy, email.policy.default)
+ def test_iter_escaped_chars(self):
+ self.assertEqual(list(utils._iter_escaped_chars(r'a\\b\"c\\"d')),
+ [(0, 'a'),
+ (2, '\\\\'),
+ (3, 'b'),
+ (5, '\\"'),
+ (6, 'c'),
+ (8, '\\\\'),
+ (9, '"'),
+ (10, 'd')])
+ self.assertEqual(list(utils._iter_escaped_chars('a\\')),
+ [(0, 'a'), (1, '\\')])
+
+ def test_strip_quoted_realnames(self):
+ def check(addr, expected):
+ self.assertEqual(utils._strip_quoted_realnames(addr), expected)
+
+ check('"Jane Doe" <jane@example.net>, "John Doe" <john@example.net>',
+ ' <jane@example.net>, <john@example.net>')
+ check(r'"Jane \"Doe\"." <jane@example.net>',
+ ' <jane@example.net>')
+
+ # special cases
+ check(r'before"name"after', 'beforeafter')
+ check(r'before"name"', 'before')
+ check(r'b"name"', 'b') # single char
+ check(r'"name"after', 'after')
+ check(r'"name"a', 'a') # single char
+ check(r'"name"', '')
+
+ # no change
+ for addr in (
+ 'Jane Doe <jane@example.net>, John Doe <john@example.net>',
+ 'lone " quote',
+ ):
+ self.assertEqual(utils._strip_quoted_realnames(addr), addr)
+
+
+ def test_check_parenthesis(self):
+ addr = 'alice@example.net'
+ self.assertTrue(utils._check_parenthesis(f'{addr} (Alice)'))
+ self.assertFalse(utils._check_parenthesis(f'{addr} )Alice('))
+ self.assertFalse(utils._check_parenthesis(f'{addr} (Alice))'))
+ self.assertFalse(utils._check_parenthesis(f'{addr} ((Alice)'))
+
+ # Ignore real name between quotes
+ self.assertTrue(utils._check_parenthesis(f'")Alice((" {addr}'))
+
# Test the iterator/generators
class TestIterators(TestEmailBase):
--- /dev/null
+++ b/Misc/NEWS.d/next/Library/2023-10-20-15-28-08.gh-issue-102988.dStNO7.rst
@@ -0,0 +1,8 @@
+:func:`email.utils.getaddresses` and :func:`email.utils.parseaddr` now
+return ``('', '')`` 2-tuples in more situations where invalid email
+addresses are encountered instead of potentially inaccurate values. Add
+optional *strict* parameter to these two functions: use ``strict=False`` to
+get the old behavior, accept malformed inputs.
+``getattr(email.utils, 'supports_strict_parsing', False)`` can be used to check
+if the *strict* parameter is available. Patch by Thomas Dwyer and Victor
+Stinner to improve the CVE-2023-27043 fix.
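
As a quick illustration of the behaviour change implemented above, here
is a small usage sketch built from the example spelled out in the
comment inside getaddresses(); it assumes a Python build that carries
this patch.

    from email.utils import getaddresses

    # One To: field value that smuggles a second address into the
    # realname part (the motivating case from the comment above).
    malformed = ['alice@example.com <bob@example.com>']

    # Lenient, pre-patch behaviour is still available via strict=False
    # and "recovers" two addresses from the single field value.
    print(getaddresses(malformed, strict=False))
    # [('', 'alice@example.com'), ('', 'bob@example.com')]

    # The new strict default rejects the whole field value instead.
    print(getaddresses(malformed))
    # [('', '')]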

View File

@ -1,7 +1,36 @@
Index: Python-3.12.3/Lib/test/test_xml_etree.py
===================================================================
--- Python-3.12.3.orig/Lib/test/test_xml_etree.py
+++ Python-3.12.3/Lib/test/test_xml_etree.py
---
Lib/test/test_pyexpat.py | 4 ++++
Lib/test/test_sax.py | 3 +++
Lib/test/test_xml_etree.py | 10 ++++++++++
3 files changed, 17 insertions(+)
--- a/Lib/test/test_pyexpat.py
+++ b/Lib/test/test_pyexpat.py
@@ -794,6 +794,10 @@ class ReparseDeferralTest(unittest.TestC
self.assertEqual(started, ['doc'])
def test_reparse_deferral_disabled(self):
+ if expat.version_info < (2, 6, 0):
+ self.skipTest(f'Expat {expat.version_info} does not '
+ 'support reparse deferral')
+
started = []
def start_element(name, _):
--- a/Lib/test/test_sax.py
+++ b/Lib/test/test_sax.py
@@ -1240,6 +1240,9 @@ class ExpatReaderTest(XmlTestBase):
self.assertEqual(result.getvalue(), start + b"<doc></doc>")
+ @unittest.skipIf(pyexpat.version_info < (2, 6, 0),
+ f'Expat {pyexpat.version_info} does not '
+ 'support reparse deferral')
def test_flush_reparse_deferral_disabled(self):
result = BytesIO()
xmlgen = XMLGenerator(result)
--- a/Lib/test/test_xml_etree.py
+++ b/Lib/test/test_xml_etree.py
@@ -121,6 +121,11 @@ ATTLIST_XML = """\
</foo>
"""
@ -36,32 +65,3 @@ Index: Python-3.12.3/Lib/test/test_xml_etree.py
def test_flush_reparse_deferral_disabled(self):
parser = ET.XMLPullParser(events=('start', 'end'))
Index: Python-3.12.3/Lib/test/test_sax.py
===================================================================
--- Python-3.12.3.orig/Lib/test/test_sax.py
+++ Python-3.12.3/Lib/test/test_sax.py
@@ -1240,6 +1240,9 @@ class ExpatReaderTest(XmlTestBase):
self.assertEqual(result.getvalue(), start + b"<doc></doc>")
+ @unittest.skipIf(pyexpat.version_info < (2, 6, 0),
+ f'Expat {pyexpat.version_info} does not '
+ 'support reparse deferral')
def test_flush_reparse_deferral_disabled(self):
result = BytesIO()
xmlgen = XMLGenerator(result)
Index: Python-3.12.3/Lib/test/test_pyexpat.py
===================================================================
--- Python-3.12.3.orig/Lib/test/test_pyexpat.py
+++ Python-3.12.3/Lib/test/test_pyexpat.py
@@ -794,6 +794,10 @@ class ReparseDeferralTest(unittest.TestC
self.assertEqual(started, ['doc'])
def test_reparse_deferral_disabled(self):
+ if expat.version_info < (2, 6, 0):
+ self.skipTest(f'Expat {expat.version_info} does not '
+ 'support reparse deferral')
+
started = []
def start_element(name, _):
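
The hunks above all gate the reparse-deferral tests on the Expat
version. A self-contained sketch of the same skip pattern follows; the
test class and method names are placeholders, not part of the patch.

    import unittest
    from xml.parsers import expat  # pyexpat; version_info is a 3-tuple

    class ReparseDeferralProbe(unittest.TestCase):
        def test_requires_expat_2_6(self):
            # Same guard the patch adds: the feature under test only
            # exists when CPython is linked against Expat >= 2.6.0.
            if expat.version_info < (2, 6, 0):
                self.skipTest(f'Expat {expat.version_info} does not '
                              'support reparse deferral')
            self.assertGreaterEqual(expat.version_info, (2, 6, 0))

    if __name__ == '__main__':
        unittest.main()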

View File

@ -1,171 +0,0 @@
---
Lib/tempfile.py | 16 +
Lib/test/test_tempfile.py | 113 ++++++++++
Misc/NEWS.d/next/Library/2022-12-01-16-57-44.gh-issue-91133.LKMVCV.rst | 2
3 files changed, 131 insertions(+)
Index: Python-3.12.4/Lib/tempfile.py
===================================================================
--- Python-3.12.4.orig/Lib/tempfile.py
+++ Python-3.12.4/Lib/tempfile.py
@@ -285,6 +285,22 @@ def _resetperms(path):
_dont_follow_symlinks(chflags, path, 0)
_dont_follow_symlinks(_os.chmod, path, 0o700)
+def _dont_follow_symlinks(func, path, *args):
+ # Pass follow_symlinks=False, unless not supported on this platform.
+ if func in _os.supports_follow_symlinks:
+ func(path, *args, follow_symlinks=False)
+ elif _os.name == 'nt' or not _os.path.islink(path):
+ func(path, *args)
+
+def _resetperms(path):
+ try:
+ chflags = _os.chflags
+ except AttributeError:
+ pass
+ else:
+ _dont_follow_symlinks(chflags, path, 0)
+ _dont_follow_symlinks(_os.chmod, path, 0o700)
+
# User visible interfaces.
Index: Python-3.12.4/Lib/test/test_tempfile.py
===================================================================
--- Python-3.12.4.orig/Lib/test/test_tempfile.py
+++ Python-3.12.4/Lib/test/test_tempfile.py
@@ -1803,6 +1803,103 @@ class TestTemporaryDirectory(BaseTestCas
new_flags = os.stat(dir1).st_flags
self.assertEqual(new_flags, old_flags)
+ @os_helper.skip_unless_symlink
+ def test_cleanup_with_symlink_modes(self):
+ # cleanup() should not follow symlinks when fixing mode bits (#91133)
+ with self.do_create(recurse=0) as d2:
+ file1 = os.path.join(d2, 'file1')
+ open(file1, 'wb').close()
+ dir1 = os.path.join(d2, 'dir1')
+ os.mkdir(dir1)
+ for mode in range(8):
+ mode <<= 6
+ with self.subTest(mode=format(mode, '03o')):
+ def test(target, target_is_directory):
+ d1 = self.do_create(recurse=0)
+ symlink = os.path.join(d1.name, 'symlink')
+ os.symlink(target, symlink,
+ target_is_directory=target_is_directory)
+ try:
+ os.chmod(symlink, mode, follow_symlinks=False)
+ except NotImplementedError:
+ pass
+ try:
+ os.chmod(symlink, mode)
+ except FileNotFoundError:
+ pass
+ os.chmod(d1.name, mode)
+ d1.cleanup()
+ self.assertFalse(os.path.exists(d1.name))
+
+ with self.subTest('nonexisting file'):
+ test('nonexisting', target_is_directory=False)
+ with self.subTest('nonexisting dir'):
+ test('nonexisting', target_is_directory=True)
+
+ with self.subTest('existing file'):
+ os.chmod(file1, mode)
+ old_mode = os.stat(file1).st_mode
+ test(file1, target_is_directory=False)
+ new_mode = os.stat(file1).st_mode
+ self.assertEqual(new_mode, old_mode,
+ '%03o != %03o' % (new_mode, old_mode))
+
+ with self.subTest('existing dir'):
+ os.chmod(dir1, mode)
+ old_mode = os.stat(dir1).st_mode
+ test(dir1, target_is_directory=True)
+ new_mode = os.stat(dir1).st_mode
+ self.assertEqual(new_mode, old_mode,
+ '%03o != %03o' % (new_mode, old_mode))
+
+ @unittest.skipUnless(hasattr(os, 'chflags'), 'requires os.chflags')
+ @os_helper.skip_unless_symlink
+ def test_cleanup_with_symlink_flags(self):
+ # cleanup() should not follow symlinks when fixing flags (#91133)
+ flags = stat.UF_IMMUTABLE | stat.UF_NOUNLINK
+ self.check_flags(flags)
+
+ with self.do_create(recurse=0) as d2:
+ file1 = os.path.join(d2, 'file1')
+ open(file1, 'wb').close()
+ dir1 = os.path.join(d2, 'dir1')
+ os.mkdir(dir1)
+ def test(target, target_is_directory):
+ d1 = self.do_create(recurse=0)
+ symlink = os.path.join(d1.name, 'symlink')
+ os.symlink(target, symlink,
+ target_is_directory=target_is_directory)
+ try:
+ os.chflags(symlink, flags, follow_symlinks=False)
+ except NotImplementedError:
+ pass
+ try:
+ os.chflags(symlink, flags)
+ except FileNotFoundError:
+ pass
+ os.chflags(d1.name, flags)
+ d1.cleanup()
+ self.assertFalse(os.path.exists(d1.name))
+
+ with self.subTest('nonexisting file'):
+ test('nonexisting', target_is_directory=False)
+ with self.subTest('nonexisting dir'):
+ test('nonexisting', target_is_directory=True)
+
+ with self.subTest('existing file'):
+ os.chflags(file1, flags)
+ old_flags = os.stat(file1).st_flags
+ test(file1, target_is_directory=False)
+ new_flags = os.stat(file1).st_flags
+ self.assertEqual(new_flags, old_flags)
+
+ with self.subTest('existing dir'):
+ os.chflags(dir1, flags)
+ old_flags = os.stat(dir1).st_flags
+ test(dir1, target_is_directory=True)
+ new_flags = os.stat(dir1).st_flags
+ self.assertEqual(new_flags, old_flags)
+
@support.cpython_only
def test_del_on_collection(self):
# A TemporaryDirectory is deleted when garbage collected
@@ -1977,6 +2074,22 @@ class TestTemporaryDirectory(BaseTestCas
def check_flags(self, flags):
# skip the test if these flags are not supported (ex: FreeBSD 13)
+ filename = os_helper.TESTFN
+ try:
+ open(filename, "w").close()
+ try:
+ os.chflags(filename, flags)
+ except OSError as exc:
+ # "OSError: [Errno 45] Operation not supported"
+ self.skipTest(f"chflags() doesn't support flags "
+ f"{flags:#b}: {exc}")
+ else:
+ os.chflags(filename, 0)
+ finally:
+ os_helper.unlink(filename)
+
+ def check_flags(self, flags):
+ # skip the test if these flags are not supported (ex: FreeBSD 13)
filename = os_helper.TESTFN
try:
open(filename, "w").close()
Index: Python-3.12.4/Misc/NEWS.d/next/Library/2022-12-01-16-57-44.gh-issue-91133.LKMVCV.rst
===================================================================
--- /dev/null
+++ Python-3.12.4/Misc/NEWS.d/next/Library/2022-12-01-16-57-44.gh-issue-91133.LKMVCV.rst
@@ -0,0 +1,2 @@
+Fix a bug in :class:`tempfile.TemporaryDirectory` cleanup, which now no longer
+dereferences symlinks when working around file system permission errors.
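
The core of the fix above is the _dont_follow_symlinks() helper. Below
is a standalone sketch of the same idea; chmod_no_follow is an
illustrative name, not a stdlib API.

    import os

    def chmod_no_follow(path, mode):
        # Prefer follow_symlinks=False where os.chmod supports it; on
        # platforms where it does not, only touch paths that are not
        # symlinks, so a symlink's target is never modified.
        if os.chmod in os.supports_follow_symlinks:
            os.chmod(path, mode, follow_symlinks=False)
        elif os.name == 'nt' or not os.path.islink(path):
            os.chmod(path, mode)
        # else: leave the symlink untouched rather than chmod its target

    # Hypothetical usage while cleaning a temporary directory entry:
    # chmod_no_follow('/tmp/some-tempdir/entry', 0o700)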

View File

@ -1,148 +0,0 @@
---
Lib/test/test_zipfile/_path/test_path.py | 78 ++++++++++
Lib/zipfile/_path/__init__.py | 18 ++
Misc/NEWS.d/next/Library/2024-08-11-14-08-04.gh-issue-122905.7tDsxA.rst | 1
Misc/NEWS.d/next/Library/2024-08-26-13-45-20.gh-issue-123270.gXHvNJ.rst | 3
4 files changed, 98 insertions(+), 2 deletions(-)
--- a/Lib/test/test_zipfile/_path/test_path.py
+++ b/Lib/test/test_zipfile/_path/test_path.py
@@ -4,6 +4,7 @@ import contextlib
import pathlib
import pickle
import sys
+import time
import unittest
import zipfile
@@ -577,3 +578,80 @@ class TestPath(unittest.TestCase):
zipfile.Path(alpharep)
with self.assertRaises(KeyError):
alpharep.getinfo('does-not-exist')
+
+ def test_malformed_paths(self):
+ """
+ Path should handle malformed paths gracefully.
+
+ Paths with leading slashes are not visible.
+
+ Paths with dots are treated like regular files.
+ """
+ data = io.BytesIO()
+ zf = zipfile.ZipFile(data, "w")
+ zf.writestr("/one-slash.txt", b"content")
+ zf.writestr("//two-slash.txt", b"content")
+ zf.writestr("../parent.txt", b"content")
+ zf.filename = ''
+ root = zipfile.Path(zf)
+ assert list(map(str, root.iterdir())) == ['../']
+ assert root.joinpath('..').joinpath('parent.txt').read_bytes() == b'content'
+
+ def test_unsupported_names(self):
+ """
+ Path segments with special characters are readable.
+
+ On some platforms or file systems, characters like
+ ``:`` and ``?`` are not allowed, but they are valid
+ in the zip file.
+ """
+ data = io.BytesIO()
+ zf = zipfile.ZipFile(data, "w")
+ zf.writestr("path?", b"content")
+ zf.writestr("V: NMS.flac", b"fLaC...")
+ zf.filename = ''
+ root = zipfile.Path(zf)
+ contents = root.iterdir()
+ assert next(contents).name == 'path?'
+ assert next(contents).name == 'V: NMS.flac'
+ assert root.joinpath('V: NMS.flac').read_bytes() == b"fLaC..."
+
+ def test_backslash_not_separator(self):
+ """
+ In a zip file, backslashes are not separators.
+ """
+ data = io.BytesIO()
+ zf = zipfile.ZipFile(data, "w")
+ zf.writestr(DirtyZipInfo.for_name("foo\\bar", zf), b"content")
+ zf.filename = ''
+ root = zipfile.Path(zf)
+ (first,) = root.iterdir()
+ assert not first.is_dir()
+ assert first.name == 'foo\\bar'
+
+
+class DirtyZipInfo(zipfile.ZipInfo):
+ """
+ Bypass name sanitization.
+ """
+
+ def __init__(self, filename, *args, **kwargs):
+ super().__init__(filename, *args, **kwargs)
+ self.filename = filename
+
+ @classmethod
+ def for_name(cls, name, archive):
+ """
+ Construct the same way that ZipFile.writestr does.
+
+ TODO: extract this functionality and re-use
+ """
+ self = cls(filename=name, date_time=time.localtime(time.time())[:6])
+ self.compress_type = archive.compression
+ self.compress_level = archive.compresslevel
+ if self.filename.endswith('/'): # pragma: no cover
+ self.external_attr = 0o40775 << 16 # drwxrwxr-x
+ self.external_attr |= 0x10 # MS-DOS directory flag
+ else:
+ self.external_attr = 0o600 << 16 # ?rw-------
+ return self
--- a/Lib/zipfile/_path/__init__.py
+++ b/Lib/zipfile/_path/__init__.py
@@ -1,3 +1,12 @@
+"""
+A Path-like interface for zipfiles.
+
+This codebase is shared between zipfile.Path in the stdlib
+and zipp in PyPI. See
+https://github.com/python/importlib_metadata/wiki/Development-Methodology
+for more detail.
+"""
+
import io
import posixpath
import zipfile
@@ -34,7 +43,7 @@ def _parents(path):
def _ancestry(path):
"""
Given a path with elements separated by
- posixpath.sep, generate all elements of that path
+ posixpath.sep, generate all elements of that path.
>>> list(_ancestry('b/d'))
['b/d', 'b']
@@ -46,9 +55,14 @@ def _ancestry(path):
['b']
>>> list(_ancestry(''))
[]
+
+ Multiple separators are treated like a single.
+
+ >>> list(_ancestry('//b//d///f//'))
+ ['//b//d///f', '//b//d', '//b']
"""
path = path.rstrip(posixpath.sep)
- while path and path != posixpath.sep:
+ while path.rstrip(posixpath.sep):
yield path
path, tail = posixpath.split(path)
--- /dev/null
+++ b/Misc/NEWS.d/next/Library/2024-08-11-14-08-04.gh-issue-122905.7tDsxA.rst
@@ -0,0 +1 @@
+:class:`zipfile.Path` objects now sanitize names from the zipfile.
--- /dev/null
+++ b/Misc/NEWS.d/next/Library/2024-08-26-13-45-20.gh-issue-123270.gXHvNJ.rst
@@ -0,0 +1,3 @@
+Applied a more surgical fix for malformed payloads in :class:`zipfile.Path`
+causing infinite loops (gh-122905) without breaking contents using
+legitimate characters.
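
The test_unsupported_names test above documents the behaviour this fix
preserves. Here is a short, runnable sketch of the same round trip with
an in-memory archive; the entry names are taken from that test.

    import io
    import zipfile

    # Entry names that are awkward on some filesystems but valid in a zip.
    data = io.BytesIO()
    with zipfile.ZipFile(data, 'w') as zf:
        zf.writestr('path?', b'content')
        zf.writestr('V: NMS.flac', b'fLaC...')

    root = zipfile.Path(zipfile.ZipFile(data))
    print([entry.name for entry in root.iterdir()])
    # ['path?', 'V: NMS.flac']
    print(root.joinpath('V: NMS.flac').read_bytes())
    # b'fLaC...'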

View File

@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa8a2e12c5e620b09f53e65bcd87550d2e5a1e2e04bf8ba991dcc55113876397
size 20422396

View File

@ -1,18 +0,0 @@
-----BEGIN PGP SIGNATURE-----
iQKTBAABCgB9FiEEcWlgX2LHUTVtBUomqCHmgOX6YwUFAmayiFtfFIAAAAAALgAo
aXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDcx
Njk2MDVGNjJDNzUxMzU2RDA1NEEyNkE4MjFFNjgwRTVGQTYzMDUACgkQqCHmgOX6
YwUr4g//VyVs9tvbtiSp8pGe8f1gYErEw54r124sL/CBuNii8Irts1j5ymGxcm+l
hshPK5UlqRnhd5dCJWFTvLTXa5Ko2R1L3JyyxfGd1hmDuMhrWsDHijI0R7L/mGM5
6X2LTaadBVNvk8HaNKvR8SEWvo68rdnOuYElFA9ir7uqwjO26ZWz9FfH80YDGwo8
Blef2NYw8rNhiaZMFV0HYV7D+YyUAZnFNfW8M7Fd4oskUyj1tD9J89T9FFLYN09d
BcCIf+EdiEfqRpKxH89bW2g52kDrm4jYGONtpyF8eruyS3YwYSbvbuWioBYKmlxC
s51mieXz6G325GTZnmPxLek3ywPv6Gil9y0wH3fIr2BsWsmXust4LBpjDGt56Fy6
seokGBg8xzsBSk3iEqNoFmNsy/QOiuCcDejX4XqBDNodOlETQPJb07TkTI2iOmg9
NG4Atiz1HvGVxK68UuK9IIcNHyaWUmH8h4VQFGvc6KV6feP5Nm21Y12PZ5XIqJBO
Y8M/VJIJ5koaNPQfnBbbI5YBkUr4BVpIXIpY5LM/L5sUo2C3R7hMi0VGK88HGfSQ
KV4JmZgf6RMBNmrWY12sryS1QQ6q3P110GTUGQWB3sxxNbhmfcrK+4viqHc83yDz
ifmk33HuqaQGU7OzUMHeNcoCJIPo3H1FpoHOn9wLLCtA1pT+as4=
=t0Rk
-----END PGP SIGNATURE-----

Python-3.12.6.tar.xz Normal file
View File

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1999658298cf2fb837dffed8ff3c033ef0c98ef20cf73c5d5f66bed5ab89697c
size 20434028

Python-3.12.6.tar.xz.asc Normal file
View File

@ -0,0 +1,18 @@
-----BEGIN PGP SIGNATURE-----
iQKTBAABCgB9FiEEcWlgX2LHUTVtBUomqCHmgOX6YwUFAmbbZv1fFIAAAAAALgAo
aXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDcx
Njk2MDVGNjJDNzUxMzU2RDA1NEEyNkE4MjFFNjgwRTVGQTYzMDUACgkQqCHmgOX6
YwV51g//WpQjF/Rt19lgaWojZ3qDkvmTM2kpvfDGLe9Tkm1fWYzji4TLS3TnzGEp
dw2K6ApqGX4aO9AKMBRdfiFyhaDp0ENlBzSspvyzVT4LsRxiWXqyJ1qZB9mui/S8
k5pw2qygaS4gYEAOLrVEwmQ52pig1wMAouSmRuopVk5DGYUN6Wir7RZMrYynsd6P
6HYqpZby2L1fKlcj2xYY44niuL5a+I8ucWN9qOBzRLCuzq20lVoII817vORjCqa7
ZUMKrDXDlzHnISNqZyyX37/oi6a8UdNm0o8V9yDJLiBu9+Dy3OueoIguuzimk4hq
ZnepBoCcr2YAxIsXvwl2qfQBOCjJ5WAZ/wzA/eQMo9Jn1TYRBwuMC1MmP0ylt7Me
/pS57bGuulkfPv9pMto7qc2lNpotBmAsfGJAJMczeEmyo5tAXnJUBE94JRmiLaR/
zwPmJB3O9uEQhEa8+cjx4+9bK6+YvAXkb9x92Wn70u+IaalPj8CRA7B45hy1KYHT
5s/ndwFFWThxqxH61oqjPvlZW5PWBC83yi/KovhgDWNL2G9CTusKevMqX/LMUAKz
M3mJU/24vUu9bJNxB2qsa3UeP8hbb6WN5LLQyxRsXPjVQ9iTeMWrQkz1mRGIK2Ec
OFbYH2SGa/RELds6MZWzvZuXIefCoyuc51WsXnc4LFr2ZGv4dAI=
=CK1W
-----END PGP SIGNATURE-----

doc-py38-to-py36.patch Normal file
View File

@ -0,0 +1,374 @@
---
Doc/conf.py | 4 +-
Doc/tools/check-warnings.py | 3 +
Doc/tools/extensions/audit_events.py | 54 ++++++++++++++++----------------
Doc/tools/extensions/c_annotations.py | 33 ++++++++-----------
Doc/tools/extensions/glossary_search.py | 10 +----
Doc/tools/extensions/patchlevel.py | 9 ++---
6 files changed, 55 insertions(+), 58 deletions(-)
--- a/Doc/conf.py
+++ b/Doc/conf.py
@@ -76,7 +76,7 @@ today_fmt = '%B %d, %Y'
highlight_language = 'python3'
# Minimum version of sphinx required
-needs_sphinx = '6.2.1'
+needs_sphinx = '4.2.0'
# Create table of contents entries for domain objects (e.g. functions, classes,
# attributes, etc.). Default is True.
@@ -328,7 +328,7 @@ html_short_title = f'{release} Documenta
# (See .readthedocs.yml and https://docs.readthedocs.io/en/stable/reference/environment-variables.html)
is_deployment_preview = os.getenv("READTHEDOCS_VERSION_TYPE") == "external"
repository_url = os.getenv("READTHEDOCS_GIT_CLONE_URL", "")
-repository_url = repository_url.removesuffix(".git")
+repository_url = repository_url[:-len(".git")]
html_context = {
"is_deployment_preview": is_deployment_preview,
"repository_url": repository_url or None,
--- a/Doc/tools/check-warnings.py
+++ b/Doc/tools/check-warnings.py
@@ -228,7 +228,8 @@ def fail_if_regression(
print(filename)
for warning in warnings:
if filename in warning:
- if match := WARNING_PATTERN.fullmatch(warning):
+ match = WARNING_PATTERN.fullmatch(warning)
+ if match:
print(" {line}: {msg}".format_map(match))
return -1
return 0
--- a/Doc/tools/extensions/audit_events.py
+++ b/Doc/tools/extensions/audit_events.py
@@ -1,9 +1,6 @@
"""Support for documenting audit events."""
-from __future__ import annotations
-
import re
-from typing import TYPE_CHECKING
from docutils import nodes
from sphinx.errors import NoUri
@@ -12,12 +9,11 @@ from sphinx.transforms.post_transforms i
from sphinx.util import logging
from sphinx.util.docutils import SphinxDirective
-if TYPE_CHECKING:
- from collections.abc import Iterator
+from typing import Any, List, Tuple
- from sphinx.application import Sphinx
- from sphinx.builders import Builder
- from sphinx.environment import BuildEnvironment
+from sphinx.application import Sphinx
+from sphinx.builders import Builder
+from sphinx.environment import BuildEnvironment
logger = logging.getLogger(__name__)
@@ -32,16 +28,16 @@ _SYNONYMS = [
class AuditEvents:
def __init__(self) -> None:
- self.events: dict[str, list[str]] = {}
- self.sources: dict[str, list[tuple[str, str]]] = {}
+ self.events: dict[str, List[str]] = {}
+ self.sources: dict[str, List[Tuple[str, str]]] = {}
- def __iter__(self) -> Iterator[tuple[str, list[str], tuple[str, str]]]:
+ def __iter__(self) -> Any:
for name, args in self.events.items():
for source in self.sources[name]:
yield name, args, source
def add_event(
- self, name, args: list[str], source: tuple[str, str]
+ self, name, args: List[str], source: Tuple[str, str]
) -> None:
if name in self.events:
self._check_args_match(name, args)
@@ -49,7 +45,7 @@ class AuditEvents:
self.events[name] = args
self.sources.setdefault(name, []).append(source)
- def _check_args_match(self, name: str, args: list[str]) -> None:
+ def _check_args_match(self, name: str, args: List[str]) -> None:
current_args = self.events[name]
msg = (
f"Mismatched arguments for audit-event {name}: "
@@ -60,7 +56,7 @@ class AuditEvents:
if len(current_args) != len(args):
logger.warning(msg)
return
- for a1, a2 in zip(current_args, args, strict=False):
+ for a1, a2 in zip(current_args, args):
if a1 == a2:
continue
if any(a1 in s and a2 in s for s in _SYNONYMS):
@@ -73,7 +69,7 @@ class AuditEvents:
name_clean = re.sub(r"\W", "_", name)
return f"audit_event_{name_clean}_{source_count}"
- def rows(self) -> Iterator[tuple[str, list[str], list[tuple[str, str]]]]:
+ def rows(self) -> Any:
for name in sorted(self.events.keys()):
yield name, self.events[name], self.sources[name]
@@ -97,7 +93,7 @@ def audit_events_purge(
def audit_events_merge(
app: Sphinx,
env: BuildEnvironment,
- docnames: list[str],
+ docnames: List[str],
other: BuildEnvironment,
) -> None:
"""In Sphinx parallel builds, this merges audit_events from subprocesses."""
@@ -126,14 +122,16 @@ class AuditEvent(SphinxDirective):
),
]
- def run(self) -> list[nodes.paragraph]:
+ def run(self) -> List[nodes.paragraph]:
+ def _no_walrus_op(args):
+ for arg in args.strip("'\"").split(","):
+ aarg = arg.strip()
+ if aarg:
+ yield aarg
+
name = self.arguments[0]
if len(self.arguments) >= 2 and self.arguments[1]:
- args = [
- arg
- for argument in self.arguments[1].strip("'\"").split(",")
- if (arg := argument.strip())
- ]
+ args = list(_no_walrus_op(self.arguments[1]))
else:
args = []
ids = []
@@ -169,7 +167,7 @@ class audit_event_list(nodes.General, no
class AuditEventListDirective(SphinxDirective):
- def run(self) -> list[audit_event_list]:
+ def run(self) -> List[audit_event_list]:
return [audit_event_list()]
@@ -181,7 +179,11 @@ class AuditEventListTransform(SphinxPost
return
table = self._make_table(self.app.builder, self.env.docname)
- for node in self.document.findall(audit_event_list):
+ try:
+ findall = self.document.findall
+ except AttributeError:
+ findall = self.document.traverse
+ for node in findall(audit_event_list):
node.replace_self(table)
def _make_table(self, builder: Builder, docname: str) -> nodes.table:
@@ -217,8 +219,8 @@ class AuditEventListTransform(SphinxPost
builder: Builder,
docname: str,
name: str,
- args: list[str],
- sources: list[tuple[str, str]],
+ args: List[str],
+ sources: List[Tuple[str, str]],
) -> nodes.row:
row = nodes.row()
name_node = nodes.paragraph("", nodes.Text(name))
--- a/Doc/tools/extensions/c_annotations.py
+++ b/Doc/tools/extensions/c_annotations.py
@@ -9,12 +9,10 @@ Configuration:
* Set ``stable_abi_file`` to the path to stable ABI list.
"""
-from __future__ import annotations
-
import csv
import dataclasses
from pathlib import Path
-from typing import TYPE_CHECKING
+from typing import Any, Dict, List, TYPE_CHECKING, Union
import sphinx
from docutils import nodes
@@ -23,9 +21,7 @@ from sphinx import addnodes
from sphinx.locale import _ as sphinx_gettext
from sphinx.util.docutils import SphinxDirective
-if TYPE_CHECKING:
- from sphinx.application import Sphinx
- from sphinx.util.typing import ExtensionMetadata
+from sphinx.application import Sphinx
ROLE_TO_OBJECT_TYPE = {
"func": "function",
@@ -36,20 +32,20 @@ ROLE_TO_OBJECT_TYPE = {
}
-@dataclasses.dataclass(slots=True)
+@dataclasses.dataclass()
class RefCountEntry:
# Name of the function.
name: str
# List of (argument name, type, refcount effect) tuples.
# (Currently not used. If it was, a dataclass might work better.)
- args: list = dataclasses.field(default_factory=list)
+ args: List = dataclasses.field(default_factory=list)
# Return type of the function.
result_type: str = ""
# Reference count effect for the return value.
- result_refs: int | None = None
+ result_refs: Union[int, None] = None
-@dataclasses.dataclass(frozen=True, slots=True)
+@dataclasses.dataclass(frozen=True)
class StableABIEntry:
# Role of the object.
# Source: Each [item_kind] in stable_abi.toml is mapped to a C Domain role.
@@ -68,7 +64,7 @@ class StableABIEntry:
struct_abi_kind: str
-def read_refcount_data(refcount_filename: Path) -> dict[str, RefCountEntry]:
+def read_refcount_data(refcount_filename: Path) -> Dict[str, RefCountEntry]:
refcount_data = {}
refcounts = refcount_filename.read_text(encoding="utf8")
for line in refcounts.splitlines():
@@ -104,7 +100,7 @@ def read_refcount_data(refcount_filename
return refcount_data
-def read_stable_abi_data(stable_abi_file: Path) -> dict[str, StableABIEntry]:
+def read_stable_abi_data(stable_abi_file: Path) -> Dict[str, StableABIEntry]:
stable_abi_data = {}
with open(stable_abi_file, encoding="utf8") as fp:
for record in csv.DictReader(fp):
@@ -135,7 +131,8 @@ def add_annotations(app: Sphinx, doctree
objtype = par["objtype"]
# Stable ABI annotation.
- if record := stable_abi_data.get(name):
+ record = stable_abi_data.get(name)
+ if record:
if ROLE_TO_OBJECT_TYPE[record.role] != objtype:
msg = (
f"Object type mismatch in limited API annotation for {name}: "
@@ -242,7 +239,7 @@ def _unstable_api_annotation() -> nodes.
)
-def _return_value_annotation(result_refs: int | None) -> nodes.emphasis:
+def _return_value_annotation(result_refs: Union[int, None]) -> nodes.emphasis:
classes = ["refcount"]
if result_refs is None:
rc = sphinx_gettext("Return value: Always NULL.")
@@ -262,7 +259,7 @@ class LimitedAPIList(SphinxDirective):
optional_arguments = 0
final_argument_whitespace = True
- def run(self) -> list[nodes.Node]:
+ def run(self) -> List[nodes.Node]:
state = self.env.domaindata["c_annotations"]
content = [
f"* :c:{record.role}:`{record.name}`"
@@ -285,7 +282,7 @@ def init_annotations(app: Sphinx) -> Non
)
-def setup(app: Sphinx) -> ExtensionMetadata:
+def setup(app: Sphinx) -> Any:
app.add_config_value("refcount_file", "", "env", types={str})
app.add_config_value("stable_abi_file", "", "env", types={str})
app.add_directive("limited-api-list", LimitedAPIList)
@@ -297,10 +294,10 @@ def setup(app: Sphinx) -> ExtensionMetad
from sphinx.domains.c import CObject
# monkey-patch C object...
- CObject.option_spec |= {
+ CObject.option_spec.update({
"no-index-entry": directives.flag,
"no-contents-entry": directives.flag,
- }
+ })
return {
"version": "1.0",
--- a/Doc/tools/extensions/glossary_search.py
+++ b/Doc/tools/extensions/glossary_search.py
@@ -1,18 +1,14 @@
"""Feature search results for glossary items prominently."""
-from __future__ import annotations
-
import json
from pathlib import Path
-from typing import TYPE_CHECKING
+from typing import Any, TYPE_CHECKING
from docutils import nodes
from sphinx.addnodes import glossary
from sphinx.util import logging
-if TYPE_CHECKING:
- from sphinx.application import Sphinx
- from sphinx.util.typing import ExtensionMetadata
+from sphinx.application import Sphinx
logger = logging.getLogger(__name__)
@@ -60,7 +56,7 @@ def write_glossary_json(app: Sphinx, _ex
dest.write_text(json.dumps(app.env.glossary_terms), encoding='utf-8')
-def setup(app: Sphinx) -> ExtensionMetadata:
+def setup(app: Sphinx) -> Any:
app.connect('doctree-resolved', process_glossary_nodes)
app.connect('build-finished', write_glossary_json)
--- a/Doc/tools/extensions/patchlevel.py
+++ b/Doc/tools/extensions/patchlevel.py
@@ -3,7 +3,7 @@
import re
import sys
from pathlib import Path
-from typing import Literal, NamedTuple
+from typing import NamedTuple, Tuple
CPYTHON_ROOT = Path(
__file__, # cpython/Doc/tools/extensions/patchlevel.py
@@ -26,7 +26,7 @@ class version_info(NamedTuple): # noqa:
major: int #: Major release number
minor: int #: Minor release number
micro: int #: Patch release number
- releaselevel: Literal["alpha", "beta", "candidate", "final"]
+ releaselevel: str
serial: int #: Serial release number
@@ -37,7 +37,8 @@ def get_header_version_info() -> version
defines = {}
patchlevel_h = PATCHLEVEL_H.read_text(encoding="utf-8")
for line in patchlevel_h.splitlines():
- if (m := pat.match(line)) is not None:
+ m = pat.match(line)
+ if m is not None:
name, value = m.groups()
defines[name] = value
@@ -50,7 +51,7 @@ def get_header_version_info() -> version
)
-def format_version_info(info: version_info) -> tuple[str, str]:
+def format_version_info(info: version_info) -> Tuple[str, str]:
version = f"{info.major}.{info.minor}"
release = f"{info.major}.{info.minor}.{info.micro}"
if info.releaselevel != "final":
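
Most of the hunks above mechanically rewrite Python 3.8+ constructs into
3.6-compatible ones. The recurring walrus-operator rewrite looks like
this in isolation; the regex and helper below are illustrative, not
taken verbatim from the patch.

    import re

    WARNING_PATTERN = re.compile(r'(?P<file>[^:]+):(?P<line>\d+): (?P<msg>.*)')

    def first_warning(lines):
        for line in lines:
            # Python 3.8+ form used upstream:
            #     if match := WARNING_PATTERN.fullmatch(line): ...
            # Python 3.6-compatible form this patch switches to:
            match = WARNING_PATTERN.fullmatch(line)
            if match:
                return match.group('file'), match.group('line'), match.group('msg')
        return None

    print(first_warning(['Doc/conf.py:42: undefined label']))
    # ('Doc/conf.py', '42', 'undefined label')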

View File

@ -1,7 +1,9 @@
Index: Python-3.12.3/Lib/test/test_compile.py
===================================================================
--- Python-3.12.3.orig/Lib/test/test_compile.py
+++ Python-3.12.3/Lib/test/test_compile.py
---
Lib/test/test_compile.py | 5 +++++
1 file changed, 5 insertions(+)
--- a/Lib/test/test_compile.py
+++ b/Lib/test/test_compile.py
@@ -14,6 +14,9 @@ from test.support import (script_helper,
requires_specialization, C_RECURSION_LIMIT)
from test.support.os_helper import FakePath

View File

@ -21,7 +21,7 @@
Create a Python.framework rather than a traditional Unix install. Optional
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -13832,7 +13832,7 @@ C API
@@ -13974,7 +13974,7 @@ C API
- bpo-40939: Removed documentation for the removed ``PyParser_*`` C API.
- bpo-43795: The list in :ref:`limited-api-list` now shows the public name

View File

@ -1,3 +1,114 @@
-------------------------------------------------------------------
Fri Sep 13 17:09:37 UTC 2024 - Matej Cepl <mcepl@cepl.eu>
- Add doc-py38-to-py36.patch making building documentation
compatible with Python 3.6, which runs Sphinx on SLE.
-------------------------------------------------------------------
Sat Sep 7 21:49:34 UTC 2024 - Matej Cepl <mcepl@cepl.eu>
- Update to 3.12.6:
- Tests
- gh-101525: Skip test_gdb if the binary is relocated by
BOLT. Patch by Donghee Na.
- Security
- gh-123678: Upgrade libexpat to 2.6.3
- gh-121285: Remove backtracking from tarfile header parsing
for hdrcharset, PAX, and GNU sparse headers (bsc#1230227,
CVE-2024-6232).
- Library
- gh-123270: Applied a more surgical fix for malformed
payloads in zipfile.Path causing infinite loops (gh-122905)
without breaking contents using legitimate characters
(bsc#1229704, CVE-2024-8088).
- gh-123213: xml.etree.ElementTree.Element.extend() and
Element assignment no longer hide the internal exception if
an erroneous generator is passed. Patch by Bar Harel.
- gh-85110: Preserve relative path in URL without netloc in
urllib.parse.urlunsplit() and urllib.parse.urlunparse().
- gh-123067: Fix quadratic complexity in parsing "-quoted
cookie values with backslashes by http.cookies
(bsc#1229596, CVE-2024-7592)
- gh-122903: zipfile.Path.glob now correctly matches
directories instead of silently omitting them.
- gh-122905: zipfile.Path objects now sanitize names from the
zipfile.
- gh-122695: Fixed double-free when using gc.get_referents()
with a freed asyncio.Future iterator.
- gh-116263: logging.handlers.RotatingFileHandler no longer
rolls over empty log files.
- gh-118814: Fix the typing.TypeVar constructor when name is
passed by keyword.
- gh-122478: Remove internal frames from tracebacks
shown in code.InteractiveInterpreter with non-default
sys.excepthook(). Save correct tracebacks in
sys.last_traceback and update __traceback__ attribute of
sys.last_value and sys.last_exc.
- gh-113785: csv now correctly parses numeric fields (when
used with csv.QUOTE_NONNUMERIC) which start with an escape
character.
- gh-112182: asyncio.futures.Future.set_exception() now
transforms StopIteration into RuntimeError instead of
hanging or other misbehavior. Patch contributed by Jamie
Phan.
- gh-108172: webbrowser honors OS preferred browser on Linux
when its desktop entry name contains the text of a known
browser name.
- gh-102988: email.utils.getaddresses() and
email.utils.parseaddr() now return ('', '') 2-tuples
in more situations where invalid email addresses are
encountered instead of potentially inaccurate values. Add
optional strict parameter to these two functions: use
strict=False to get the old behavior and accept malformed
inputs. getattr(email.utils, 'supports_strict_parsing',
False) can be used to check if the strict parameter is
available. Patch by Thomas Dwyer and Victor Stinner to
improve the CVE-2023-27043 fix.
- gh-99437: runpy.run_path() now decodes path-like objects,
making sure __file__ and sys.argv[0] of the module being
run are always strings.
- IDLE
- gh-120083: Add explicit black IDLE Hovertip foreground
color needed for recent macOS. Fixes Sonoma showing
unreadable white on pale yellow. Patch by John Riggles.
- Core and Builtins
- gh-123321: Prevent Parser/myreadline race condition from
segfaulting on multi-threaded use. Patch by Bar Harel and
Amit Wienner.
- gh-122982: Extend the deprecation period for bool inversion
(~) by two years.
- gh-123229: Fix valgrind warning by initializing the
f-string buffers to 0 in the tokenizer. Patch by Pablo
Galindo
- gh-123142: Fix too-wide source location in exception
tracebacks coming from broken iterables in comprehensions.
- gh-123048: Fix a bug where pattern matching code could emit
a JUMP_FORWARD with no source location.
- gh-123083: Fix a potential use-after-free in
STORE_ATTR_WITH_HINT.
- gh-122527: Fix a crash that occurred when a
PyStructSequence was deallocated after its type's
dictionary was cleared by the GC. The type's tp_basicsize
now accounts for non-sequence fields that aren't included
in the Py_SIZE of the sequence.
- gh-93691: Fix source locations of instructions generated
for with statements.
- Build
- gh-123297: Propagate the value of LDFLAGS to LDCXXSHARED in
sysconfig. Patch by Pablo Galindo
- Remove upstreamed patches:
- CVE-2023-27043-email-parsing-errors.patch
- CVE-2024-8088-inf-loop-zipfile_Path.patch
- CVE-2023-6597-TempDir-cleaning-symlink.patch
- gh120226-fix-sendfile-test-kernel-610.patch
-------------------------------------------------------------------
Mon Sep 2 09:44:26 UTC 2024 - Matej Cepl <mcepl@cepl.eu>
- Add gh120226-fix-sendfile-test-kernel-610.patch to avoid
failing test_sendfile_close_peer_in_the_middle_of_receiving
tests on Linux >= 6.10 (GH-120227).
-------------------------------------------------------------------
Wed Aug 28 16:54:34 UTC 2024 - Matej Cepl <mcepl@cepl.eu>

View File

@ -110,7 +110,7 @@
# _md5.cpython-38m-x86_64-linux-gnu.so
%define dynlib() %{sitedir}/lib-dynload/%{1}.cpython-%{abi_tag}-%{archname}-%{_os}%{?_gnu}%{?armsuffix}.so
Name: %{python_pkg_name}%{psuffix}
Version: 3.12.5
Version: 3.12.6
Release: 0
Summary: Python 3 Interpreter
License: Python-2.0
@ -168,13 +168,6 @@ Patch34: skip-test_pyobject_freed_is_freed.patch
# PATCH-FIX-SLE fix_configure_rst.patch bpo#43774 mcepl@suse.com
# remove duplicate link targets and make documentation with old Sphinx in SLE
Patch35: fix_configure_rst.patch
# PATCH-FIX-UPSTREAM CVE-2023-27043-email-parsing-errors.patch bsc#1210638 mcepl@suse.com
# Detect email address parsing errors and return empty tuple to
# indicate the parsing error (old API)
Patch36: CVE-2023-27043-email-parsing-errors.patch
# PATCH-FIX-UPSTREAM CVE-2023-6597-TempDir-cleaning-symlink.patch bsc#1219666 mcepl@suse.com
# tempfile.TemporaryDirectory: fix symlink bug in cleanup (from gh#python/cpython!99930)
Patch38: CVE-2023-6597-TempDir-cleaning-symlink.patch
# PATCH-FIX-OPENSUSE CVE-2023-52425-libexpat-2.6.0-backport-15.6.patch
# This problem on libexpat is patched on 15.6 without version
# update, this patch changes the tests to match the libexpat provided
@ -186,9 +179,9 @@ Patch40: fix-test-recursion-limit-15.6.patch
# PATCH-FIX-SLE docs-docutils_014-Sphinx_420.patch bsc#[0-9]+ mcepl@suse.com
# related to gh#python/cpython#119317
Patch41: docs-docutils_014-Sphinx_420.patch
# PATCH-FIX-UPSTREAM CVE-2024-8088-inf-loop-zipfile_Path.patch bsc#1229704 mcepl@suse.com
# avoid denial of service in zipfile
Patch42: CVE-2024-8088-inf-loop-zipfile_Path.patch
# PATCH-FIX-SLE doc-py38-to-py36.patch mcepl@suse.com
# Make documentation extensions working with Python 3.6
Patch44: doc-py38-to-py36.patch
BuildRequires: autoconf-archive
BuildRequires: automake
BuildRequires: fdupes
@ -219,6 +212,9 @@ BuildRequires: mpdecimal-devel
BuildRequires: python3-Sphinx >= 4.0.0
%if 0%{?suse_version} >= 1500
BuildRequires: python3-python-docs-theme >= 2022.1
%if 0%{?suse_version} < 1599
BuildRequires: python3-dataclasses
%endif
%endif
%endif
%if %{with general}
@ -480,7 +476,7 @@ rm Lib/site-packages/README.txt
tar xvf %{SOURCE21}
# Don't fail on warnings when building documentation
# sed -i -e '/^SPHINXERRORHANDLING/s/-W//' Doc/Makefile
sed -i -e '/^SPHINXERRORHANDLING/s/-W//' Doc/Makefile
%build
%if %{with doc}