33 Commits

c4952f9b7f Accepting request 1297601 from Virtualization:containers
OBS-URL: https://build.opensuse.org/request/show/1297601
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=14
2025-08-05 12:21:50 +00:00
f4a3ff2dbe Accepting request 1293988 from Virtualization:containers
OBS-URL: https://build.opensuse.org/request/show/1293988
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=13
2025-07-17 15:18:56 +00:00
ae31662aab - Update to docker-buildx v0.25.0. Upstream changelog:
  <https://github.com/docker/buildx/releases/tag/v0.25.0>
- Update to Go 1.23 for building now that upstream has switched their 23.0.x
  LTSS to use Go 1.23.

OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=31
2025-07-17 04:31:09 +00:00
451c8ce3cb Accepting request 1284722 from Virtualization:containers
OBS-URL: https://build.opensuse.org/request/show/1284722
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=12
2025-07-01 09:34:07 +00:00
84dfc0f999 Accepting request 1284721 from home:cyphar:docker
- Patches included from snapshot:
  + 0001-SECRETS-daemon-allow-directory-creation-in-run-secre.patch
  + 0002-SECRETS-SUSE-implement-SUSE-container-secrets.patch
  + 0003-BUILD-SLE12-revert-graphdriver-btrfs-use-kernel-UAPI.patch
  + 0004-bsc1073877-apparmor-clobber-docker-default-profile-o.patch
  + 0005-SLE12-revert-apparmor-remove-version-conditionals-fr.patch
  + 0006-CVE-2024-23653-update-buildkit-to-include-CVE-patche.patch
  + cli-0001-docs-include-required-tools-in-source-tree.patch

OBS-URL: https://build.opensuse.org/request/show/1284721
OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=29
2025-06-11 08:30:48 +00:00
a5826f5486 Accepting request 1283417 from Virtualization:containers
OBS-URL: https://build.opensuse.org/request/show/1283417
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=11
2025-06-06 20:41:49 +00:00
bd8116a690 - Do not try to inject SUSEConnect secrets when in Rootless Docker mode, as
  Docker does not have permission to access the host zypper credentials in this
  mode (and unprivileged users cannot disable the feature using
  /etc/docker/suse-secrets-enable). bsc#1240150

  * 0003-SECRETS-SUSE-implement-SUSE-container-secrets.patch

- Rebase patches:
  * 0001-SECRETS-SUSE-always-clear-our-internal-secrets.patch
  * 0002-SECRETS-daemon-allow-directory-creation-in-run-secre.patch
  * 0004-BUILD-SLE12-revert-graphdriver-btrfs-use-kernel-UAPI.patch
  * 0005-bsc1073877-apparmor-clobber-docker-default-profile-o.patch
  * 0006-SLE12-revert-apparmor-remove-version-conditionals-fr.patch
  * 0007-CVE-2024-2365x-update-buildkit-to-include-CVE-patche.patch
  * 0008-bsc1221916-update-to-patched-buildkit-version-to-fix.patch
  * 0009-bsc1214855-volume-use-AtomicWriteFile-to-save-volume.patch
  * 0010-CVE-2024-41110-AuthZ-plugin-securty-fixes.patch
  * 0011-CVE-2024-29018-libnet-Don-t-forward-to-upstream-reso.patch
  * 0012-CVE-2025-22868-vendor-jws-split-token-into-fixed-num.patch
  * 0013-CVE-2025-22869-vendor-ssh-limit-the-size-of-the-inte.patch
  * 0014-TESTS-backport-fixes-for-integration-tests.patch

OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=27
2025-06-05 16:35:01 +00:00
8461728396 Accepting request 1282505 from Virtualization:containers
OBS-URL: https://build.opensuse.org/request/show/1282505
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=10
2025-06-04 18:28:15 +00:00
bb577e6225 - Always clear SUSEConnect suse_* secrets when starting containers regardless
  of whether the daemon was built with SUSEConnect support. Not doing this
  causes containers from SUSEConnect-enabled daemons to fail to start when
  running with SUSEConnect-disabled (i.e. upstream) daemons.
  This was a long-standing issue with our secrets support, but until recently
  hitting it would've required migrating from SLE packages to openSUSE packages
  (which wasn't supported). However, as SLE Micro 6.x and SLES 16 will move
  away from built-in SUSEConnect support, this is now a practical issue users
  will run into. bsc#1244035
  + 0001-SECRETS-SUSE-always-clear-our-internal-secrets.patch
- Rearrange patches:
  - 0001-SECRETS-daemon-allow-directory-creation-in-run-secre.patch
  + 0002-SECRETS-daemon-allow-directory-creation-in-run-secre.patch
  - 0002-SECRETS-SUSE-implement-SUSE-container-secrets.patch
  + 0003-SECRETS-SUSE-implement-SUSE-container-secrets.patch
  - 0003-BUILD-SLE12-revert-graphdriver-btrfs-use-kernel-UAPI.patch
  + 0004-BUILD-SLE12-revert-graphdriver-btrfs-use-kernel-UAPI.patch
  - 0004-bsc1073877-apparmor-clobber-docker-default-profile-o.patch
  + 0005-bsc1073877-apparmor-clobber-docker-default-profile-o.patch
  - 0005-SLE12-revert-apparmor-remove-version-conditionals-fr.patch
  + 0006-SLE12-revert-apparmor-remove-version-conditionals-fr.patch
  - 0006-CVE-2024-2365x-update-buildkit-to-include-CVE-patche.patch
  + 0007-CVE-2024-2365x-update-buildkit-to-include-CVE-patche.patch
  - 0007-bsc1221916-update-to-patched-buildkit-version-to-fix.patch
  + 0008-bsc1221916-update-to-patched-buildkit-version-to-fix.patch
  - 0008-bsc1214855-volume-use-AtomicWriteFile-to-save-volume.patch
  + 0009-bsc1214855-volume-use-AtomicWriteFile-to-save-volume.patch
  - 0009-CVE-2024-41110-AuthZ-plugin-securty-fixes.patch
  + 0010-CVE-2024-41110-AuthZ-plugin-securty-fixes.patch
  - 0010-CVE-2024-29018-libnet-Don-t-forward-to-upstream-reso.patch
  + 0011-CVE-2024-29018-libnet-Don-t-forward-to-upstream-reso.patch
  - 0011-CVE-2025-22868-vendor-jws-split-token-into-fixed-num.patch
  + 0012-CVE-2025-22868-vendor-jws-split-token-into-fixed-num.patch
  - 0012-CVE-2025-22869-vendor-ssh-limit-the-size-of-the-inte.patch
  + 0013-CVE-2025-22869-vendor-ssh-limit-the-size-of-the-inte.patch
  - 0013-TESTS-backport-fixes-for-integration-tests.patch
  + 0014-TESTS-backport-fixes-for-integration-tests.patch

OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=25
2025-06-04 06:14:16 +00:00
bdfa56d393 Accepting request 1268265 from Virtualization:containers
OBS-URL: https://build.opensuse.org/request/show/1268265
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=9
2025-04-10 19:59:20 +00:00
47dc4f48fa - Update to docker-buildx v0.22.0. Upstream changelog:
  <https://github.com/docker/buildx/releases/tag/v0.22.0>
  * Includes fixes for CVE-2025-0495. bsc#1239765
- Disable transparent SUSEConnect support for SLE-16. PED-12534
  When this patchset was first added in 2013 (and rewritten over the years),
  there was no easy upstream way for SLE customers to build
  container images based on SLE using the host subscription. However, with
  docker-buildx you can now define secrets for builds (this is not entirely
  transparent, but we can easily document this new requirement for SLE-16).
  Users should use
    RUN --mount=type=secret,id=SCCcredentials zypper -n ...
  in their Dockerfiles, and
    docker buildx build --secret id=SCCcredentials,src=/etc/zypp/credentials.d/SCCcredentials,type=file .
  when doing their builds; a fuller example follows at the end of this entry.
- Now that the only blocker for docker-buildx support on SLE-16 has been
  removed, enable docker-buildx for SLE-16 as well. PED-8905
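  For example, a build using the host subscription could combine the two like
  so (the base image and installed package below are purely illustrative, not
  something this package mandates):
    # Dockerfile (hypothetical example)
    FROM registry.suse.com/suse/sle15:latest
    RUN --mount=type=secret,id=SCCcredentials zypper -n install gcc
  and then, on a registered SLE host:
    docker buildx build \
      --secret id=SCCcredentials,src=/etc/zypp/credentials.d/SCCcredentials,type=file \
      -t my-image .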

OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=23
2025-04-10 03:37:04 +00:00
3b21671934 Accepting request 1256097 from home:cyphar:docker
- Don't use the new container-selinux conditional requires on SLE-12, as the
  RPM version there doesn't support it. Arguably the change itself is a bit
  suspect but we can fix that later. bsc#1237367
- Make container-selinux requirement conditional on selinux-policy
  (bsc#1237367)

OBS-URL: https://build.opensuse.org/request/show/1256097
OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=22
2025-03-26 02:43:22 +00:00
87bc6e5edc Accepting request 1255774 from Virtualization:containers
OBS-URL: https://build.opensuse.org/request/show/1255774
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=8
2025-03-25 21:11:17 +00:00
9e69e34cc5 - Add backport for golang.org/x/oauth2 CVE-2025-22868 fix. bsc#1239185
  + 0011-CVE-2025-22868-vendor-jws-split-token-into-fixed-num.patch
- Add backport for golang.org/x/crypto CVE-2025-22869 fix. bsc#1239322
  + 0012-CVE-2025-22869-vendor-ssh-limit-the-size-of-the-inte.patch
- Refresh patches:
  * 0001-SECRETS-daemon-allow-directory-creation-in-run-secre.patch
  * 0002-SECRETS-SUSE-implement-SUSE-container-secrets.patch
  * 0003-BUILD-SLE12-revert-graphdriver-btrfs-use-kernel-UAPI.patch
  * 0004-bsc1073877-apparmor-clobber-docker-default-profile-o.patch
  * 0005-SLE12-revert-apparmor-remove-version-conditionals-fr.patch
  * 0006-CVE-2024-2365x-update-buildkit-to-include-CVE-patche.patch
  * 0007-bsc1221916-update-to-patched-buildkit-version-to-fix.patch
  * 0008-bsc1214855-volume-use-AtomicWriteFile-to-save-volume.patch
  * 0009-CVE-2024-41110-AuthZ-plugin-securty-fixes.patch
  * 0010-CVE-2024-29018-libnet-Don-t-forward-to-upstream-reso.patch
- Move test-related patch to the end of the patch stack:
  - 0011-TESTS-backport-fixes-for-integration-tests.patch
  + 0013-TESTS-backport-fixes-for-integration-tests.patch

OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=20
2025-03-25 04:02:47 +00:00
9c336ff601 Accepting request 1237207 from Virtualization:containers
OBS-URL: https://build.opensuse.org/request/show/1237207
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=7
2025-01-13 16:50:43 +00:00
1d00d6bb91 Fix changelog.
OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=18
2025-01-13 01:29:23 +00:00
2a6e8f4c54 Accepting request 1231782 from Virtualization:containers
OBS-URL: https://build.opensuse.org/request/show/1231782
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=6
2024-12-18 19:09:45 +00:00
c393080e52 - Add backport for CVE-2024-29018 fix. bsc#1234089
  + 0010-CVE-2024-29018-libnet-Don-t-forward-to-upstream-reso.patch
- Add backport for CVE-2024-23650 fix. bsc#1219437
  - 0006-CVE-2024-23653-update-buildkit-to-include-CVE-patche.patch
  + 0006-CVE-2024-2365x-update-buildkit-to-include-CVE-patche.patch
- Reorder and rebase patches:
  * 0001-SECRETS-daemon-allow-directory-creation-in-run-secre.patch
  * 0002-SECRETS-SUSE-implement-SUSE-container-secrets.patch
  * 0003-BUILD-SLE12-revert-graphdriver-btrfs-use-kernel-UAPI.patch
  * 0004-bsc1073877-apparmor-clobber-docker-default-profile-o.patch
  * 0005-SLE12-revert-apparmor-remove-version-conditionals-fr.patch
  * 0007-bsc1221916-update-to-patched-buildkit-version-to-fix.patch
  * 0008-bsc1214855-volume-use-AtomicWriteFile-to-save-volume.patch
  * 0009-CVE-2024-41110-AuthZ-plugin-securty-fixes.patch
  - 0010-TESTS-backport-fixes-for-integration-tests.patch
  + 0011-TESTS-backport-fixes-for-integration-tests.patch

OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=16
2024-12-18 06:26:49 +00:00
c27b8c2d8f Accepting request 1231697 from Virtualization:containers
OBS-URL: https://build.opensuse.org/request/show/1231697
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=5
2024-12-17 18:25:20 +00:00
0380cf68a8 Accepting request 1231695 from home:cyphar:docker
- Update to docker-buildx 0.19.3. See upstream changelog online at
  <https://github.com/docker/buildx/releases/tag/v0.19.3>

OBS-URL: https://build.opensuse.org/request/show/1231695
OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=14
2024-12-17 13:26:31 +00:00
0b754a6ceb Accepting request 1230150 from Virtualization:containers
OBS-URL: https://build.opensuse.org/request/show/1230150
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=4
2024-12-12 20:17:51 +00:00
ff3bcb3eda Remove DOCKER_SUSE_SECRETS_ENABLE changelog entry.
OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=12
2024-12-11 15:36:10 +00:00
f61acbec84 - Update docker-buildx to v0.19.2. See upstream changelog online at
  <https://github.com/docker/buildx/releases/tag/v0.19.2>.
  Some notable changelogs from the last update:
    * <https://github.com/docker/buildx/releases/tag/v0.19.0>
    * <https://github.com/docker/buildx/releases/tag/v0.18.0>
- Update to Go 1.22.

- Add a new toggle file /etc/docker/suse-secrets-enable which allows users to
  disable the SUSEConnect integration with Docker (which creates special mounts
  in /run/secrets to allow container-suseconnect to authenticate containers
  with registries on registered hosts). bsc#1231348 bsc#1232999
  In order to disable these mounts, just do
    echo 0 > /etc/docker/suse-secrets-enable
  and restart Docker. In order to re-enable them, just do
    echo 1 > /etc/docker/suse-secrets-enable
  and restart Docker. Docker will output information on startup to tell you
  whether the SUSE secrets feature is enabled or not.
  * 0002-SECRETS-SUSE-implement-SUSE-container-secrets.patch
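  As a quick sanity check (the systemctl/journalctl invocations here are
  illustrative; the log messages themselves come from the patch):
    echo 0 > /etc/docker/suse-secrets-enable
    systemctl restart docker
    journalctl -u docker | grep 'SUSE:secrets'
    # SUSE:secrets :: SUSEConnect support disabled by /etc/docker/suse-secrets-enable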

- Add docker-integration-tests-devel subpackage for building and running the
  upstream Docker integration tests on machines to test that Docker works
  properly. Users should not install this package.
- docker-rpmlintrc updated to include an allow-list for everything in the
  integration-tests package, since it contains a bunch of stuff that wouldn't
  normally be allowed.
- Added patches:
  + 0010-TESTS-backport-fixes-for-integration-tests.patch

OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=11
2024-12-11 10:51:10 +00:00
6baeb55273 Accepting request 1228306 from Virtualization:containers
Automatic submission by obs-autosubmit

OBS-URL: https://build.opensuse.org/request/show/1228306
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=3
2024-12-05 16:08:47 +00:00
1a4287f660 - Disable docker-buildx builds for SLES. It turns out that build containers
  with docker-buildx don't currently get the SUSE secrets mounts applied,
  meaning that container-suseconnect doesn't work when building images.
  bsc#1233819

OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=9
2024-11-27 12:52:23 +00:00
1d2100e493 Accepting request 1224329 from Virtualization:containers
OBS-URL: https://build.opensuse.org/request/show/1224329
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=2
2024-11-15 14:43:32 +00:00
310b0df6c4 Re-add comment removed by auto-format.
OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=7
2024-11-15 00:49:44 +00:00
a8cee429ef - Remove DOCKER_NETWORK_OPTS from docker.service. This was removed from
  sysconfig a long time ago, and apparently this causes issues with systemd in
  some cases.
- Update --add-runtime to point to correct binary path.

OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=6
2024-11-15 00:13:41 +00:00
9e516b4cdf Accepting request 1219925 from Virtualization:containers
Add docker-stable package.

OBS-URL: https://build.opensuse.org/request/show/1219925
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/docker-stable?expand=0&rev=1
2024-11-01 20:04:47 +00:00
1931d76a2c Apply patches properly.
OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=4
2024-10-31 17:47:03 +00:00
de974cbb79 docker.spec -> docker-stable.spec
OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=3
2024-10-30 14:42:40 +00:00
0bcaef05f2 docker.changes -> docker-stable.changes
OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=2
2024-10-30 14:24:16 +00:00
d3d431381b Add docker-stable package.
OBS-URL: https://build.opensuse.org/package/show/Virtualization:containers/docker-stable?expand=0&rev=1
2024-10-18 00:35:19 +00:00
17 changed files with 0 additions and 6067 deletions

@@ -1,74 +0,0 @@
From a94378d92f7ef523b17aa399ce83b27f7986980f Mon Sep 17 00:00:00 2001
From: Aleksa Sarai <asarai@suse.de>
Date: Wed, 8 Mar 2017 12:41:54 +1100
Subject: [PATCH 01/13] SECRETS: daemon: allow directory creation in
/run/secrets
Since FileMode can have the directory bit set, allow a SecretStore
implementation to return secrets that are actually directories. This is
useful for creating directories and subdirectories of secrets.
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
Signed-off-by: Aleksa Sarai <asarai@suse.de>
---
daemon/container_operations_unix.go | 23 ++++++++++++++++++++---
1 file changed, 20 insertions(+), 3 deletions(-)
diff --git a/daemon/container_operations_unix.go b/daemon/container_operations_unix.go
index 290ec59a34a7..b7013fb89c83 100644
--- a/daemon/container_operations_unix.go
+++ b/daemon/container_operations_unix.go
@@ -4,6 +4,7 @@
package daemon // import "github.com/docker/docker/daemon"
import (
+ "bytes"
"fmt"
"os"
"path/filepath"
@@ -14,6 +15,7 @@ import (
"github.com/docker/docker/daemon/links"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/libnetwork"
+ "github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/idtools"
"github.com/docker/docker/pkg/process"
"github.com/docker/docker/pkg/stringid"
@@ -206,9 +208,6 @@ func (daemon *Daemon) setupSecretDir(c *container.Container) (setupErr error) {
if err != nil {
return errors.Wrap(err, "unable to get secret from secret store")
}
- if err := os.WriteFile(fPath, secret.Spec.Data, s.File.Mode); err != nil {
- return errors.Wrap(err, "error injecting secret")
- }
uid, err := strconv.Atoi(s.File.UID)
if err != nil {
@@ -219,6 +218,24 @@ func (daemon *Daemon) setupSecretDir(c *container.Container) (setupErr error) {
return err
}
+ if s.File.Mode.IsDir() {
+ if err := os.Mkdir(fPath, s.File.Mode); err != nil {
+ return errors.Wrap(err, "error creating secretdir")
+ }
+ if secret.Spec.Data != nil {
+ // If the "file" is a directory, then s.File.Data is actually a tar
+ // archive of the directory. So we just do a tar extraction here.
+ if err := archive.UntarUncompressed(bytes.NewBuffer(secret.Spec.Data), fPath, &archive.TarOptions{
+ IDMap: daemon.idMapping,
+ }); err != nil {
+ return errors.Wrap(err, "error injecting secretdir")
+ }
+ }
+ } else {
+ if err := os.WriteFile(fPath, secret.Spec.Data, s.File.Mode); err != nil {
+ return errors.Wrap(err, "error injecting secret")
+ }
+ }
if err := os.Chown(fPath, rootIDs.UID+uid, rootIDs.GID+gid); err != nil {
return errors.Wrap(err, "error setting ownership for secret")
}
--
2.48.1

@@ -1,510 +0,0 @@
From 009cad241857541779baa2a9fae8291597dc85f8 Mon Sep 17 00:00:00 2001
From: Aleksa Sarai <asarai@suse.de>
Date: Wed, 8 Mar 2017 11:43:29 +1100
Subject: [PATCH 02/13] SECRETS: SUSE: implement SUSE container secrets
This allows for us to pass in host credentials to a container, allowing
for SUSEConnect to work with containers.
Users can disable this by setting DOCKER_SUSE_SECRETS_ENABLE=0 in
/etc/sysconfig/docker or by adding that setting to docker.service's
Environment using a drop-in file.
THIS PATCH IS NOT TO BE UPSTREAMED, DUE TO THE FACT THAT IT IS
SUSE-SPECIFIC, AND UPSTREAM DOES NOT APPROVE OF THIS CONCEPT BECAUSE IT
MAKES BUILDS NOT ENTIRELY REPRODUCIBLE.
SUSE-Bugs: bsc#1065609 bsc#1057743 bsc#1055676 bsc#1030702 bsc#1231348
Signed-off-by: Aleksa Sarai <asarai@suse.de>
---
daemon/start.go | 5 +
daemon/suse_secrets.go | 461 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 466 insertions(+)
create mode 100644 daemon/suse_secrets.go
diff --git a/daemon/start.go b/daemon/start.go
index 2e0b9e6be847..dca04486888f 100644
--- a/daemon/start.go
+++ b/daemon/start.go
@@ -151,6 +151,11 @@ func (daemon *Daemon) containerStart(ctx context.Context, container *container.C
return err
}
+ // SUSE:secrets -- inject the SUSE secret store
+ if err := daemon.injectSuseSecretStore(container); err != nil {
+ return errdefs.System(err)
+ }
+
spec, err := daemon.createSpec(ctx, container)
if err != nil {
return errdefs.System(err)
diff --git a/daemon/suse_secrets.go b/daemon/suse_secrets.go
new file mode 100644
index 000000000000..85b37bf46544
--- /dev/null
+++ b/daemon/suse_secrets.go
@@ -0,0 +1,461 @@
+/*
+ * suse-secrets: patch for Docker to implement SUSE secrets
+ * Copyright (C) 2017-2021 SUSE LLC.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package daemon
+
+import (
+ "archive/tar"
+ "bytes"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "strings"
+ "syscall"
+
+ "github.com/docker/docker/container"
+ "github.com/docker/docker/pkg/archive"
+ "github.com/docker/docker/pkg/idtools"
+
+ swarmtypes "github.com/docker/docker/api/types/swarm"
+ swarmexec "github.com/moby/swarmkit/v2/agent/exec"
+ swarmapi "github.com/moby/swarmkit/v2/api"
+
+ "github.com/opencontainers/go-digest"
+ "github.com/sirupsen/logrus"
+)
+
+const suseSecretsTogglePath = "/etc/docker/suse-secrets-enable"
+
+// parseEnableFile parses a file that can only contain "0" or "1" (with some
+// whitespace).
+func parseEnableFile(path string) (bool, error) {
+ data, err := os.ReadFile(path)
+ if err != nil {
+ return false, err
+ }
+ data = bytes.TrimSpace(data)
+
+ switch value := string(data); value {
+ case "1":
+ return true, nil
+ case "0", "":
+ return false, nil
+ default:
+ return false, fmt.Errorf("invalid value %q (must be 0 to disable or 1 to enable)", value)
+ }
+}
+
+func isSuseSecretsEnabled() bool {
+ value, err := parseEnableFile(suseSecretsTogglePath)
+ if err != nil {
+ logrus.Warnf("SUSE:secrets :: error parsing %s: %v -- disabling SUSE secrets", suseSecretsTogglePath, err)
+ value = false
+ }
+ return value
+}
+
+var suseSecretsEnabled = true
+
+func init() {
+ // Make this entire feature toggle-able so that users can disable it if
+ // they run into issues like bsc#1231348.
+ suseSecretsEnabled = isSuseSecretsEnabled()
+ if suseSecretsEnabled {
+ logrus.Infof("SUSE:secrets :: SUSEConnect support enabled (set %s to 0 to disable)", suseSecretsTogglePath)
+ } else {
+ logrus.Infof("SUSE:secrets :: SUSEConnect support disabled by %s", suseSecretsTogglePath)
+ }
+}
+
+// Creating a fake file.
+type SuseFakeFile struct {
+ Path string
+ Uid int
+ Gid int
+ Mode os.FileMode
+ Data []byte
+}
+
+func (s SuseFakeFile) id() string {
+ // NOTE: It is _very_ important that this string always has a prefix of
+ // "suse". This is how we can ensure that we can operate on
+ // SecretReferences with a confidence that it was made by us.
+ return fmt.Sprintf("suse_%s_%s", digest.FromBytes(s.Data).Hex(), s.Path)
+}
+
+func (s SuseFakeFile) toSecret() *swarmapi.Secret {
+ return &swarmapi.Secret{
+ ID: s.id(),
+ Internal: true,
+ Spec: swarmapi.SecretSpec{
+ Data: s.Data,
+ },
+ }
+}
+
+func (s SuseFakeFile) toSecretReference(idMaps idtools.IdentityMapping) *swarmtypes.SecretReference {
+ // Figure out the host-facing {uid,gid} based on the provided maps. Fall
+ // back to root if the UID/GID don't match (we are guaranteed that root is
+ // mapped).
+ ctrUser := idtools.Identity{UID: s.Uid, GID: s.Gid}
+ hostUser := idMaps.RootPair()
+ if user, err := idMaps.ToHost(ctrUser); err == nil {
+ hostUser = user
+ }
+
+ // Return the secret reference as a file target.
+ return &swarmtypes.SecretReference{
+ SecretID: s.id(),
+ SecretName: s.id(),
+ File: &swarmtypes.SecretReferenceFileTarget{
+ Name: s.Path,
+ UID: fmt.Sprintf("%d", hostUser.UID),
+ GID: fmt.Sprintf("%d", hostUser.GID),
+ Mode: s.Mode,
+ },
+ }
+}
+
+// readDir will recurse into a directory prefix/dir, and return the set of
+// secrets in that directory (as a tar archive that is packed inside the "data"
+// field). The Path attribute of each has the prefix stripped. Symlinks are
+// dereferenced.
+func readDir(prefix, dir string) ([]*SuseFakeFile, error) {
+ var suseFiles []*SuseFakeFile
+
+ path := filepath.Join(prefix, dir)
+ fi, err := os.Stat(path)
+ if err != nil {
+ // Ignore missing files.
+ if os.IsNotExist(err) {
+ // If the path itself exists it was a dangling symlink so give a
+ // warning about the symlink dangling.
+ _, err2 := os.Lstat(path)
+ if !os.IsNotExist(err2) {
+ logrus.Warnf("SUSE:secrets :: ignoring dangling symlink: %s", path)
+ }
+ return nil, nil
+ }
+ return nil, err
+ } else if !fi.IsDir() {
+ // Just to be safe.
+ logrus.Infof("SUSE:secrets :: expected %q to be a directory, but was a file", path)
+ return readFile(prefix, dir)
+ }
+ path, err = filepath.EvalSymlinks(path)
+ if err != nil {
+ return nil, err
+ }
+
+ // Construct a tar archive of the source directory. We tar up the prefix
+ // directory and add dir as an IncludeFiles specifically so that we
+ // preserve the name of the directory itself.
+ tarStream, err := archive.TarWithOptions(path, &archive.TarOptions{
+ Compression: archive.Uncompressed,
+ IncludeSourceDir: true,
+ })
+ if err != nil {
+ return nil, fmt.Errorf("SUSE:secrets :: failed to tar source directory %q: %v", path, err)
+ }
+ tarStreamBytes, err := ioutil.ReadAll(tarStream)
+ if err != nil {
+ return nil, fmt.Errorf("SUSE:secrets :: failed to read full tar archive: %v", err)
+ }
+
+ // Get a list of the symlinks in the tar archive.
+ var symlinks []string
+ tmpTr := tar.NewReader(bytes.NewBuffer(tarStreamBytes))
+ for {
+ hdr, err := tmpTr.Next()
+ if err == io.EOF {
+ break
+ }
+ if err != nil {
+ return nil, fmt.Errorf("SUSE:secrets :: failed to read through tar reader: %v", err)
+ }
+ if hdr.Typeflag == tar.TypeSymlink {
+ symlinks = append(symlinks, hdr.Name)
+ }
+ }
+
+ // Symlinks aren't dereferenced in the above archive, so we explicitly do a
+ // rewrite of the tar archive to include all symlinks to files. We cannot
+ // do directories here, but lower-level directory symlinks aren't supported
+ // by zypper so this isn't an issue.
+ symlinkModifyMap := map[string]archive.TarModifierFunc{}
+ for _, sym := range symlinks {
+ logrus.Debugf("SUSE:secrets: archive(%q) %q is a need-to-rewrite symlink", path, sym)
+ symlinkModifyMap[sym] = func(tarPath string, hdr *tar.Header, r io.Reader) (*tar.Header, []byte, error) {
+ logrus.Debugf("SUSE:secrets: archive(%q) mapping for symlink %q", path, tarPath)
+ tarFullPath := filepath.Join(path, tarPath)
+
+ // Get a copy of the original byte stream.
+ oldContent, err := ioutil.ReadAll(r)
+ if err != nil {
+ return nil, nil, fmt.Errorf("suse_rewrite: failed to read archive entry %q: %v", tarPath, err)
+ }
+
+ // Check that the file actually exists.
+ fi, err := os.Stat(tarFullPath)
+ if err != nil {
+ logrus.Warnf("suse_rewrite: failed to stat archive entry %q: %v", tarFullPath, err)
+ return hdr, oldContent, nil
+ }
+
+ // Read the actual contents.
+ content, err := ioutil.ReadFile(tarFullPath)
+ if err != nil {
+ logrus.Warnf("suse_rewrite: failed to read %q: %v", tarFullPath, err)
+ return hdr, oldContent, nil
+ }
+
+ newHdr, err := tar.FileInfoHeader(fi, "")
+ if err != nil {
+ // Fake the header.
+ newHdr = &tar.Header{
+ Typeflag: tar.TypeReg,
+ Mode: 0644,
+ }
+ }
+
+ // Update the key fields.
+ hdr.Typeflag = newHdr.Typeflag
+ hdr.Mode = newHdr.Mode
+ hdr.Linkname = ""
+ return hdr, content, nil
+ }
+ }
+
+ // Create the rewritten tar stream.
+ tarStream = archive.ReplaceFileTarWrapper(ioutil.NopCloser(bytes.NewBuffer(tarStreamBytes)), symlinkModifyMap)
+ tarStreamBytes, err = ioutil.ReadAll(tarStream)
+ if err != nil {
+ return nil, fmt.Errorf("SUSE:secrets :: failed to read rewritten archive: %v", err)
+ }
+
+ // Add the tar stream as a "file".
+ suseFiles = append(suseFiles, &SuseFakeFile{
+ Path: dir,
+ Mode: fi.Mode(),
+ Data: tarStreamBytes,
+ })
+ return suseFiles, nil
+}
+
+// readFile returns a secret given a file under a given prefix.
+func readFile(prefix, file string) ([]*SuseFakeFile, error) {
+ path := filepath.Join(prefix, file)
+ fi, err := os.Stat(path)
+ if err != nil {
+ // Ignore missing files.
+ if os.IsNotExist(err) {
+ // If the path itself exists it was a dangling symlink so give a
+ // warning about the symlink dangling.
+ _, err2 := os.Lstat(path)
+ if !os.IsNotExist(err2) {
+ logrus.Warnf("SUSE:secrets :: ignoring dangling symlink: %s", path)
+ }
+ return nil, nil
+ }
+ return nil, err
+ } else if fi.IsDir() {
+ // Just to be safe.
+ logrus.Infof("SUSE:secrets :: expected %q to be a file, but was a directory", path)
+ return readDir(prefix, file)
+ }
+
+ var uid, gid int
+ if stat, ok := fi.Sys().(*syscall.Stat_t); ok {
+ uid, gid = int(stat.Uid), int(stat.Gid)
+ } else {
+ logrus.Warnf("SUSE:secrets :: failed to cast file stat_t: defaulting to owned by root:root: %s", path)
+ uid, gid = 0, 0
+ }
+
+ bytes, err := ioutil.ReadFile(path)
+ if err != nil {
+ return nil, err
+ }
+
+ var suseFiles []*SuseFakeFile
+ suseFiles = append(suseFiles, &SuseFakeFile{
+ Path: file,
+ Uid: uid,
+ Gid: gid,
+ Mode: fi.Mode(),
+ Data: bytes,
+ })
+ return suseFiles, nil
+}
+
+// getHostSuseSecretData returns the list of SuseFakeFiles that need to be added
+// as SUSE secrets.
+func getHostSuseSecretData() ([]*SuseFakeFile, error) {
+ secrets := []*SuseFakeFile{}
+
+ credentials, err := readDir("/etc/zypp", "credentials.d")
+ if err != nil {
+ if os.IsNotExist(err) {
+ credentials = []*SuseFakeFile{}
+ } else {
+ logrus.Errorf("SUSE:secrets :: error while reading zypp credentials: %s", err)
+ return nil, err
+ }
+ }
+ secrets = append(secrets, credentials...)
+
+ suseConnect, err := readFile("/etc", "SUSEConnect")
+ if err != nil {
+ if os.IsNotExist(err) {
+ suseConnect = []*SuseFakeFile{}
+ } else {
+ logrus.Errorf("SUSE:secrets :: error while reading /etc/SUSEConnect: %s", err)
+ return nil, err
+ }
+ }
+ secrets = append(secrets, suseConnect...)
+
+ return secrets, nil
+}
+
+// To fake an empty store, in the case where we are operating on a container
+// that was created pre-swarmkit. Otherwise segfaults and other fun things
+// happen. See bsc#1057743.
+type (
+ suseEmptyStore struct{}
+ suseEmptySecret struct{}
+ suseEmptyConfig struct{}
+ suseEmptyVolume struct{}
+)
+
+// In order to reduce the amount of code touched outside of this file, we
+// implement the swarm API for DependencyGetter. This asserts that this
+// requirement will always be matched. In addition, for the case of the *empty*
+// getters this reduces memory usage by having a global instance.
+var (
+ _ swarmexec.DependencyGetter = &suseDependencyStore{}
+ emptyStore swarmexec.DependencyGetter = suseEmptyStore{}
+ emptySecret swarmexec.SecretGetter = suseEmptySecret{}
+ emptyConfig swarmexec.ConfigGetter = suseEmptyConfig{}
+ emptyVolume swarmexec.VolumeGetter = suseEmptyVolume{}
+)
+
+var errSuseEmptyStore = fmt.Errorf("SUSE:secrets :: tried to get a resource from empty store [this is a bug]")
+
+func (_ suseEmptyConfig) Get(_ string) (*swarmapi.Config, error) { return nil, errSuseEmptyStore }
+func (_ suseEmptySecret) Get(_ string) (*swarmapi.Secret, error) { return nil, errSuseEmptyStore }
+func (_ suseEmptyVolume) Get(_ string) (string, error) { return "", errSuseEmptyStore }
+func (_ suseEmptyStore) Secrets() swarmexec.SecretGetter { return emptySecret }
+func (_ suseEmptyStore) Configs() swarmexec.ConfigGetter { return emptyConfig }
+func (_ suseEmptyStore) Volumes() swarmexec.VolumeGetter { return emptyVolume }
+
+type suseDependencyStore struct {
+ dfl swarmexec.DependencyGetter
+ secrets map[string]*swarmapi.Secret
+}
+
+// The following are effectively dumb wrappers that return ourselves, or the
+// default.
+func (s *suseDependencyStore) Secrets() swarmexec.SecretGetter { return s }
+func (s *suseDependencyStore) Volumes() swarmexec.VolumeGetter { return emptyVolume }
+func (s *suseDependencyStore) Configs() swarmexec.ConfigGetter { return s.dfl.Configs() }
+
+// Get overrides the underlying DependencyGetter with our own secrets (falling
+// through to the underlying DependencyGetter if the secret isn't present).
+func (s *suseDependencyStore) Get(id string) (*swarmapi.Secret, error) {
+ logrus.Debugf("SUSE:secrets :: id=%s requested from suseDependencyGetter", id)
+
+ secret, ok := s.secrets[id]
+ if !ok {
+ // fallthrough
+ return s.dfl.Secrets().Get(id)
+ }
+ return secret, nil
+}
+
+// removeSuseSecrets removes any SecretReferences which were added by us
+// explicitly (this is detected by checking that the prefix has a 'suse'
+// prefix). See bsc#1057743.
+func removeSuseSecrets(c *container.Container) {
+ var without []*swarmtypes.SecretReference
+ for _, secret := range c.SecretReferences {
+ if strings.HasPrefix(secret.SecretID, "suse") {
+ logrus.Debugf("SUSE:secrets :: removing 'old' suse secret %q from container %q", secret.SecretID, c.ID)
+ continue
+ }
+ without = append(without, secret)
+ }
+ c.SecretReferences = without
+}
+
+func (daemon *Daemon) injectSuseSecretStore(c *container.Container) error {
+ // We drop any "old" SUSE secrets, as it appears that old containers (when
+ // restarted) could still have references to old secrets. The .id() of all
+ // secrets have a prefix of "suse" so this is much easier. See bsc#1057743
+ // for details on why this could cause issues.
+ removeSuseSecrets(c)
+
+ // Don't inject anything if the administrator has disabled suse secrets.
+ // However, for previous existing containers we need to remove old secrets
+ // (see above), otherwise they will still have old secret data.
+ if !suseSecretsEnabled {
+ logrus.Debugf("SUSE:secrets :: skipping injection of secrets into container %q because of %s", c.ID, suseSecretsTogglePath)
+ return nil
+ }
+
+ newDependencyStore := &suseDependencyStore{
+ dfl: c.DependencyStore,
+ secrets: make(map[string]*swarmapi.Secret),
+ }
+ // Handle old containers. See bsc#1057743.
+ if newDependencyStore.dfl == nil {
+ newDependencyStore.dfl = emptyStore
+ }
+
+ secrets, err := getHostSuseSecretData()
+ if err != nil {
+ return err
+ }
+
+ idMaps := daemon.idMapping
+ for _, secret := range secrets {
+ newDependencyStore.secrets[secret.id()] = secret.toSecret()
+ c.SecretReferences = append(c.SecretReferences, secret.toSecretReference(idMaps))
+ }
+
+ c.DependencyStore = newDependencyStore
+
+ // bsc#1057743 -- In older versions of Docker we added volumes explicitly
+ // to the mount list. This causes clashes because of duplicate namespaces.
+ // If we see an existing mount that will clash with the in-built secrets
+ // mount we assume it's our fault.
+ intendedMounts, err := c.SecretMounts()
+ if err != nil {
+ logrus.Warnf("SUSE:secrets :: fetching old secret mounts: %v", err)
+ return err
+ }
+ for _, intendedMount := range intendedMounts {
+ mountPath := intendedMount.Destination
+ if volume, ok := c.MountPoints[mountPath]; ok {
+ logrus.Debugf("SUSE:secrets :: removing pre-existing %q mount: %#v", mountPath, volume)
+ delete(c.MountPoints, mountPath)
+ }
+ }
+ return nil
+}
--
2.48.1

@@ -1,46 +0,0 @@
From 3f1bda82f345cc919a70cf747cc8c6f094c9451a Mon Sep 17 00:00:00 2001
From: Aleksa Sarai <asarai@suse.de>
Date: Mon, 22 May 2023 15:44:54 +1000
Subject: [PATCH 03/13] BUILD: SLE12: revert "graphdriver/btrfs: use kernel
UAPI headers"
This reverts commit 3208dcabdc8997340b255f5b880fef4e3f54580d.
On SLE 12, our UAPI headers are too old, resulting in us being unable to
build the btrfs driver with the new headers. This patch is only needed
for SLE-12.
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
---
daemon/graphdriver/btrfs/btrfs.go | 13 ++++---------
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/daemon/graphdriver/btrfs/btrfs.go b/daemon/graphdriver/btrfs/btrfs.go
index d88efc4be2bb..4e976aa689cd 100644
--- a/daemon/graphdriver/btrfs/btrfs.go
+++ b/daemon/graphdriver/btrfs/btrfs.go
@@ -5,17 +5,12 @@ package btrfs // import "github.com/docker/docker/daemon/graphdriver/btrfs"
/*
#include <stdlib.h>
-#include <stdio.h>
#include <dirent.h>
-#include <linux/version.h>
-#if LINUX_VERSION_CODE < KERNEL_VERSION(4,12,0)
- #error "Headers from kernel >= 4.12 are required to build with Btrfs support."
- #error "HINT: Set 'DOCKER_BUILDTAGS=exclude_graphdriver_btrfs' to build without Btrfs."
-#endif
-
-#include <linux/btrfs.h>
-#include <linux/btrfs_tree.h>
+// keep struct field name compatible with btrfs-progs < 6.1.
+#define max_referenced max_rfer
+#include <btrfs/ioctl.h>
+#include <btrfs/ctree.h>
static void set_name_btrfs_ioctl_vol_args_v2(struct btrfs_ioctl_vol_args_v2* btrfs_struct, const char* value) {
snprintf(btrfs_struct->name, BTRFS_SUBVOL_NAME_MAX, "%s", value);
--
2.48.1

@@ -1,89 +0,0 @@
From ba4df1cb80fa7956c148230193037a2b112a40a5 Mon Sep 17 00:00:00 2001
From: Aleksa Sarai <asarai@suse.de>
Date: Fri, 29 Jun 2018 17:59:30 +1000
Subject: [PATCH 04/13] bsc1073877: apparmor: clobber docker-default profile on
start
In the process of making docker-default reloading far less expensive,
567ef8e7858c ("daemon: switch to 'ensure' workflow for AppArmor
profiles") mistakenly made the initial profile load at dockerd start-up
lazy. As a result, if you have a running Docker daemon and upgrade it to
a new one with an updated AppArmor profile the new profile will not take
effect (because the old one is still loaded). The fix for this is quite
trivial, and just requires us to clobber the profile on start-up.
Fixes: 567ef8e7858c ("daemon: switch to 'ensure' workflow for AppArmor profiles")
SUSE-Bugs: bsc#1099277
Signed-off-by: Aleksa Sarai <asarai@suse.de>
---
daemon/apparmor_default.go | 14 ++++++++++----
daemon/apparmor_default_unsupported.go | 4 ++++
daemon/daemon.go | 5 +++--
3 files changed, 17 insertions(+), 6 deletions(-)
diff --git a/daemon/apparmor_default.go b/daemon/apparmor_default.go
index 6376001613f7..5fde21a4af8a 100644
--- a/daemon/apparmor_default.go
+++ b/daemon/apparmor_default.go
@@ -24,6 +24,15 @@ func DefaultApparmorProfile() string {
return ""
}
+func clobberDefaultAppArmorProfile() error {
+ if apparmor.HostSupports() {
+ if err := aaprofile.InstallDefault(defaultAppArmorProfile); err != nil {
+ return fmt.Errorf("AppArmor enabled on system but the %s profile could not be loaded: %s", defaultAppArmorProfile, err)
+ }
+ }
+ return nil
+}
+
func ensureDefaultAppArmorProfile() error {
if apparmor.HostSupports() {
loaded, err := aaprofile.IsLoaded(defaultAppArmorProfile)
@@ -37,10 +46,7 @@ func ensureDefaultAppArmorProfile() error {
}
// Load the profile.
- if err := aaprofile.InstallDefault(defaultAppArmorProfile); err != nil {
- return fmt.Errorf("AppArmor enabled on system but the %s profile could not be loaded: %s", defaultAppArmorProfile, err)
- }
+ return clobberDefaultAppArmorProfile()
}
-
return nil
}
diff --git a/daemon/apparmor_default_unsupported.go b/daemon/apparmor_default_unsupported.go
index e3dc18b32b5e..9c7723056268 100644
--- a/daemon/apparmor_default_unsupported.go
+++ b/daemon/apparmor_default_unsupported.go
@@ -3,6 +3,10 @@
package daemon // import "github.com/docker/docker/daemon"
+func clobberDefaultAppArmorProfile() error {
+ return nil
+}
+
func ensureDefaultAppArmorProfile() error {
return nil
}
diff --git a/daemon/daemon.go b/daemon/daemon.go
index 585d85086f8d..6e4c6ad1ac01 100644
--- a/daemon/daemon.go
+++ b/daemon/daemon.go
@@ -845,8 +845,9 @@ func NewDaemon(ctx context.Context, config *config.Config, pluginStore *plugin.S
logrus.Warnf("Failed to configure golang's threads limit: %v", err)
}
- // ensureDefaultAppArmorProfile does nothing if apparmor is disabled
- if err := ensureDefaultAppArmorProfile(); err != nil {
+ // Make sure we clobber any pre-existing docker-default profile to ensure
+ // that upgrades to the profile actually work smoothly.
+ if err := clobberDefaultAppArmorProfile(); err != nil {
logrus.Errorf(err.Error())
}
--
2.48.1

@@ -1,241 +0,0 @@
From 0ca28257e81eed36ff840bff822ff7add3e2efa2 Mon Sep 17 00:00:00 2001
From: Aleksa Sarai <asarai@suse.de>
Date: Wed, 11 Oct 2023 21:19:12 +1100
Subject: [PATCH 05/13] SLE12: revert "apparmor: remove version-conditionals
from template"
This reverts the following commits:
* 7008a514493a ("profiles/apparmor: remove version-conditional constraints (< 2.8.96)")
* 2e19a4d56bf2 ("contrib/apparmor: remove version-conditionals (< 2.9) from template")
* d169a5730649 ("contrib/apparmor: remove remaining version-conditionals (< 2.9) from template")
* ecaab085db4b ("profiles/apparmor: remove use of aaparser.GetVersion()")
* e3e715666f95 ("pkg/aaparser: deprecate GetVersion, as it's no longer used")
These version conditionals are still required on SLE 12, where our
apparmor_parser version is quite old.
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
---
contrib/apparmor/main.go | 16 ++++++++++++++--
contrib/apparmor/template.go | 16 ++++++++++++++++
pkg/aaparser/aaparser.go | 2 --
profiles/apparmor/apparmor.go | 14 ++++++++++++--
profiles/apparmor/template.go | 4 ++++
5 files changed, 46 insertions(+), 6 deletions(-)
diff --git a/contrib/apparmor/main.go b/contrib/apparmor/main.go
index d67890d265de..f4a2978b86cb 100644
--- a/contrib/apparmor/main.go
+++ b/contrib/apparmor/main.go
@@ -6,9 +6,13 @@ import (
"os"
"path"
"text/template"
+
+ "github.com/docker/docker/pkg/aaparser"
)
-type profileData struct{}
+type profileData struct {
+ Version int
+}
func main() {
if len(os.Args) < 2 {
@@ -18,6 +22,15 @@ func main() {
// parse the arg
apparmorProfilePath := os.Args[1]
+ version, err := aaparser.GetVersion()
+ if err != nil {
+ log.Fatal(err)
+ }
+ data := profileData{
+ Version: version,
+ }
+ fmt.Printf("apparmor_parser is of version %+v\n", data)
+
// parse the template
compiled, err := template.New("apparmor_profile").Parse(dockerProfileTemplate)
if err != nil {
@@ -35,7 +48,6 @@ func main() {
}
defer f.Close()
- data := profileData{}
if err := compiled.Execute(f, data); err != nil {
log.Fatalf("executing template failed: %v", err)
}
diff --git a/contrib/apparmor/template.go b/contrib/apparmor/template.go
index 58afcbe845ee..e6d0b6d37c58 100644
--- a/contrib/apparmor/template.go
+++ b/contrib/apparmor/template.go
@@ -20,9 +20,11 @@ profile /usr/bin/docker (attach_disconnected, complain) {
umount,
pivot_root,
+{{if ge .Version 209000}}
signal (receive) peer=@{profile_name},
signal (receive) peer=unconfined,
signal (send),
+{{end}}
network,
capability,
owner /** rw,
@@ -45,10 +47,12 @@ profile /usr/bin/docker (attach_disconnected, complain) {
/etc/ld.so.cache r,
/etc/passwd r,
+{{if ge .Version 209000}}
ptrace peer=@{profile_name},
ptrace (read) peer=docker-default,
deny ptrace (trace) peer=docker-default,
deny ptrace peer=/usr/bin/docker///bin/ps,
+{{end}}
/usr/lib/** rm,
/lib/** rm,
@@ -69,9 +73,11 @@ profile /usr/bin/docker (attach_disconnected, complain) {
/sbin/zfs rCx,
/sbin/apparmor_parser rCx,
+{{if ge .Version 209000}}
# Transitions
change_profile -> docker-*,
change_profile -> unconfined,
+{{end}}
profile /bin/cat (complain) {
/etc/ld.so.cache r,
@@ -93,8 +99,10 @@ profile /usr/bin/docker (attach_disconnected, complain) {
/dev/null rw,
/bin/ps mr,
+{{if ge .Version 209000}}
# We don't need ptrace so we'll deny and ignore the error.
deny ptrace (read, trace),
+{{end}}
# Quiet dac_override denials
deny capability dac_override,
@@ -112,11 +120,15 @@ profile /usr/bin/docker (attach_disconnected, complain) {
/proc/tty/drivers r,
}
profile /sbin/iptables (complain) {
+{{if ge .Version 209000}}
signal (receive) peer=/usr/bin/docker,
+{{end}}
capability net_admin,
}
profile /sbin/auplink flags=(attach_disconnected, complain) {
+{{if ge .Version 209000}}
signal (receive) peer=/usr/bin/docker,
+{{end}}
capability sys_admin,
capability dac_override,
@@ -135,7 +147,9 @@ profile /usr/bin/docker (attach_disconnected, complain) {
/proc/[0-9]*/mounts rw,
}
profile /sbin/modprobe /bin/kmod (complain) {
+{{if ge .Version 209000}}
signal (receive) peer=/usr/bin/docker,
+{{end}}
capability sys_module,
/etc/ld.so.cache r,
/lib/** rm,
@@ -149,7 +163,9 @@ profile /usr/bin/docker (attach_disconnected, complain) {
}
# xz works via pipes, so we do not need access to the filesystem.
profile /usr/bin/xz (complain) {
+{{if ge .Version 209000}}
signal (receive) peer=/usr/bin/docker,
+{{end}}
/etc/ld.so.cache r,
/lib/** rm,
/usr/bin/xz rm,
diff --git a/pkg/aaparser/aaparser.go b/pkg/aaparser/aaparser.go
index 3d7c2c5a97b3..2b5a2605f9c1 100644
--- a/pkg/aaparser/aaparser.go
+++ b/pkg/aaparser/aaparser.go
@@ -13,8 +13,6 @@ const (
)
// GetVersion returns the major and minor version of apparmor_parser.
-//
-// Deprecated: no longer used, and will be removed in the next release.
func GetVersion() (int, error) {
output, err := cmd("", "--version")
if err != nil {
diff --git a/profiles/apparmor/apparmor.go b/profiles/apparmor/apparmor.go
index d0f236160506..b3566b2f7354 100644
--- a/profiles/apparmor/apparmor.go
+++ b/profiles/apparmor/apparmor.go
@@ -14,8 +14,10 @@ import (
"github.com/docker/docker/pkg/aaparser"
)
-// profileDirectory is the file store for apparmor profiles and macros.
-const profileDirectory = "/etc/apparmor.d"
+var (
+ // profileDirectory is the file store for apparmor profiles and macros.
+ profileDirectory = "/etc/apparmor.d"
+)
// profileData holds information about the given profile for generation.
type profileData struct {
@@ -27,6 +29,8 @@ type profileData struct {
Imports []string
// InnerImports defines the apparmor functions to import in the profile.
InnerImports []string
+ // Version is the {major, minor, patch} version of apparmor_parser as a single number.
+ Version int
}
// generateDefault creates an apparmor profile from ProfileData.
@@ -46,6 +50,12 @@ func (p *profileData) generateDefault(out io.Writer) error {
p.InnerImports = append(p.InnerImports, "#include <abstractions/base>")
}
+ ver, err := aaparser.GetVersion()
+ if err != nil {
+ return err
+ }
+ p.Version = ver
+
return compiled.Execute(out, p)
}
diff --git a/profiles/apparmor/template.go b/profiles/apparmor/template.go
index 9f207e2014a8..626e5f6789a3 100644
--- a/profiles/apparmor/template.go
+++ b/profiles/apparmor/template.go
@@ -24,12 +24,14 @@ profile {{.Name}} flags=(attach_disconnected,mediate_deleted) {
capability,
file,
umount,
+{{if ge .Version 208096}}
# Host (privileged) processes may send signals to container processes.
signal (receive) peer=unconfined,
# dockerd may send signals to container processes (for "docker kill").
signal (receive) peer={{.DaemonProfile}},
# Container processes may send signals amongst themselves.
signal (send,receive) peer={{.Name}},
+{{end}}
deny @{PROC}/* w, # deny write for all files directly in /proc (not in a subdir)
# deny write to files not in /proc/<number>/** or /proc/sys/**
@@ -50,7 +52,9 @@ profile {{.Name}} flags=(attach_disconnected,mediate_deleted) {
deny /sys/devices/virtual/powercap/** rwklx,
deny /sys/kernel/security/** rwklx,
+{{if ge .Version 208095}}
# suppress ptrace denials when using 'docker ps' or using 'ps' inside a container
ptrace (trace,read,tracedby,readby) peer={{.Name}},
+{{end}}
}
`
--
2.48.1
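
The version thresholds in the templates above are single integers produced by
aaparser.GetVersion(). Judging from the constants used in the template, it
encodes an apparmor_parser "major.minor.patch" version as
major*10^5 + minor*10^3 + patch (an inference from the patch, not something it
states outright):

    apparmor_parser --version      # e.g. "AppArmor parser version 2.8.95"
    # 2*100000 + 8*1000 + 95 = 208095  (passes the "ge .Version 208095" guard)
    # 2*100000 + 9*1000 +  0 = 209000  (the threshold used in contrib/apparmor)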

File diff suppressed because it is too large.

@@ -1,898 +0,0 @@
From b760758157cd0d00f46f37f86a9cbee7810cb666 Mon Sep 17 00:00:00 2001
From: Aleksa Sarai <cyphar@cyphar.com>
Date: Thu, 2 May 2024 22:50:23 +1000
Subject: [PATCH 07/13] bsc1221916: update to patched buildkit version to fix
symlink resolution
SUSE-Bugs: https://bugzilla.suse.com/show_bug.cgi?id=1221916
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
---
builder/builder-next/worker/worker.go | 2 +-
vendor.mod | 2 +-
vendor.sum | 4 +-
.../buildkit/cache/contenthash/checksum.go | 393 ++++++++++--------
.../moby/buildkit/cache/contenthash/path.go | 161 +++----
vendor/modules.txt | 4 +-
6 files changed, 314 insertions(+), 252 deletions(-)
diff --git a/builder/builder-next/worker/worker.go b/builder/builder-next/worker/worker.go
index 64d7b9131b16..7b40ac63ce7f 100644
--- a/builder/builder-next/worker/worker.go
+++ b/builder/builder-next/worker/worker.go
@@ -50,7 +50,7 @@ import (
)
func init() {
- version.Version = "v0.11.7+cd804dd86389"
+ version.Version = "v0.11.7+6b814972ef19"
}
const labelCreatedAt = "buildkit/createdat"
diff --git a/vendor.mod b/vendor.mod
index 2eb13746cacd..021d62b21d19 100644
--- a/vendor.mod
+++ b/vendor.mod
@@ -99,7 +99,7 @@ require (
)
// github.com/SUSE/buildkit suse-stable-v24.0.9
-replace github.com/moby/buildkit => github.com/SUSE/buildkit v0.0.0-20241218053907-cd804dd86389
+replace github.com/moby/buildkit => github.com/SUSE/buildkit v0.0.0-20241218053911-6b814972ef19
require (
cloud.google.com/go v0.102.1 // indirect
diff --git a/vendor.sum b/vendor.sum
index 716245c80413..4bdbbeb3f073 100644
--- a/vendor.sum
+++ b/vendor.sum
@@ -141,8 +141,8 @@ github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdko
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/RackSec/srslog v0.0.0-20180709174129-a4725f04ec91 h1:vX+gnvBc56EbWYrmlhYbFYRaeikAke1GL84N4BEYOFE=
github.com/RackSec/srslog v0.0.0-20180709174129-a4725f04ec91/go.mod h1:cDLGBht23g0XQdLjzn6xOGXDkLK182YfINAaZEQLCHQ=
-github.com/SUSE/buildkit v0.0.0-20241218053907-cd804dd86389 h1:EKne0CAOXpf1QuZ3+jj7PTpOtSn+q1Yz5H6pAwrOktY=
-github.com/SUSE/buildkit v0.0.0-20241218053907-cd804dd86389/go.mod h1:bMQDryngJKGvJ/ZuRFhrejurbvYSv3NkGCheQ59X4AM=
+github.com/SUSE/buildkit v0.0.0-20241218053911-6b814972ef19 h1:3gfqJcXxLASvlAfgd+TFPrrhNrM+O26HplOhi3BNT+A=
+github.com/SUSE/buildkit v0.0.0-20241218053911-6b814972ef19/go.mod h1:bMQDryngJKGvJ/ZuRFhrejurbvYSv3NkGCheQ59X4AM=
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7lmo=
github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
diff --git a/vendor/github.com/moby/buildkit/cache/contenthash/checksum.go b/vendor/github.com/moby/buildkit/cache/contenthash/checksum.go
index dcf424a6b4fc..13a74be24c4e 100644
--- a/vendor/github.com/moby/buildkit/cache/contenthash/checksum.go
+++ b/vendor/github.com/moby/buildkit/cache/contenthash/checksum.go
@@ -10,6 +10,7 @@ import (
"path/filepath"
"strings"
"sync"
+ "sync/atomic"
iradix "github.com/hashicorp/go-immutable-radix"
"github.com/hashicorp/golang-lru/simplelru"
@@ -288,7 +289,7 @@ func keyPath(p string) string {
// HandleChange notifies the source about a modification operation
func (cc *cacheContext) HandleChange(kind fsutil.ChangeKind, p string, fi os.FileInfo, err error) (retErr error) {
p = keyPath(p)
- k := convertPathToKey([]byte(p))
+ k := convertPathToKey(p)
deleteDir := func(cr *CacheRecord) {
if cr.Type == CacheRecordTypeDir {
@@ -367,7 +368,7 @@ func (cc *cacheContext) HandleChange(kind fsutil.ChangeKind, p string, fi os.Fil
// note that the source may be called later because data writing is async
if fi.Mode()&os.ModeSymlink == 0 && stat.Linkname != "" {
ln := path.Join("/", filepath.ToSlash(stat.Linkname))
- v, ok := cc.txn.Get(convertPathToKey([]byte(ln)))
+ v, ok := cc.txn.Get(convertPathToKey(ln))
if ok {
cp := *v.(*CacheRecord)
cr = &cp
@@ -405,7 +406,7 @@ func (cc *cacheContext) Checksum(ctx context.Context, mountable cache.Mountable,
defer m.clean()
if !opts.Wildcard && len(opts.IncludePatterns) == 0 && len(opts.ExcludePatterns) == 0 {
- return cc.checksumFollow(ctx, m, p, opts.FollowLinks)
+ return cc.lazyChecksum(ctx, m, p, opts.FollowLinks)
}
includedPaths, err := cc.includedPaths(ctx, m, p, opts)
@@ -416,7 +417,7 @@ func (cc *cacheContext) Checksum(ctx context.Context, mountable cache.Mountable,
if opts.FollowLinks {
for i, w := range includedPaths {
if w.record.Type == CacheRecordTypeSymlink {
- dgst, err := cc.checksumFollow(ctx, m, w.path, opts.FollowLinks)
+ dgst, err := cc.lazyChecksum(ctx, m, w.path, opts.FollowLinks)
if err != nil {
return "", err
}
@@ -443,30 +444,6 @@ func (cc *cacheContext) Checksum(ctx context.Context, mountable cache.Mountable,
return digester.Digest(), nil
}
-func (cc *cacheContext) checksumFollow(ctx context.Context, m *mount, p string, follow bool) (digest.Digest, error) {
- const maxSymlinkLimit = 255
- i := 0
- for {
- if i > maxSymlinkLimit {
- return "", errors.Errorf("too many symlinks: %s", p)
- }
- cr, err := cc.checksumNoFollow(ctx, m, p)
- if err != nil {
- return "", err
- }
- if cr.Type == CacheRecordTypeSymlink && follow {
- link := cr.Linkname
- if !path.IsAbs(cr.Linkname) {
- link = path.Join(path.Dir(p), link)
- }
- i++
- p = link
- } else {
- return cr.Digest, nil
- }
- }
-}
-
func (cc *cacheContext) includedPaths(ctx context.Context, m *mount, p string, opts ChecksumOpts) ([]*includedPath, error) {
cc.mu.Lock()
defer cc.mu.Unlock()
@@ -476,12 +453,12 @@ func (cc *cacheContext) includedPaths(ctx context.Context, m *mount, p string, o
}
root := cc.tree.Root()
- scan, err := cc.needsScan(root, "")
+ scan, err := cc.needsScan(root, "", false)
if err != nil {
return nil, err
}
if scan {
- if err := cc.scanPath(ctx, m, ""); err != nil {
+ if err := cc.scanPath(ctx, m, "", false); err != nil {
return nil, err
}
}
@@ -534,13 +511,13 @@ func (cc *cacheContext) includedPaths(ctx context.Context, m *mount, p string, o
}
} else {
origPrefix = p
- k = convertPathToKey([]byte(origPrefix))
+ k = convertPathToKey(origPrefix)
// We need to resolve symlinks here, in case the base path
// involves a symlink. That will match fsutil behavior of
// calling functions such as stat and walk.
var cr *CacheRecord
- k, cr, err = getFollowLinks(root, k, true)
+ k, cr, err = getFollowLinks(root, k, false)
if err != nil {
return nil, err
}
@@ -552,7 +529,7 @@ func (cc *cacheContext) includedPaths(ctx context.Context, m *mount, p string, o
iter.SeekLowerBound(append(append([]byte{}, k...), 0))
}
- resolvedPrefix = string(convertKeyToPath(k))
+ resolvedPrefix = convertKeyToPath(k)
} else {
k, _, keyOk = iter.Next()
}
@@ -563,7 +540,7 @@ func (cc *cacheContext) includedPaths(ctx context.Context, m *mount, p string, o
)
for keyOk {
- fn := string(convertKeyToPath(k))
+ fn := convertKeyToPath(k)
// Convert the path prefix from what we found in the prefix
// tree to what the argument specified.
@@ -749,36 +726,12 @@ func wildcardPrefix(root *iradix.Node, p string) (string, []byte, bool, error) {
return "", nil, false, nil
}
- linksWalked := 0
- k, cr, err := getFollowLinksWalk(root, convertPathToKey([]byte(d1)), true, &linksWalked)
+ // Only resolve the final symlink component if there are components in the
+ // wildcard segment.
+ k, cr, err := getFollowLinks(root, convertPathToKey(d1), d2 != "")
if err != nil {
return "", k, false, err
}
-
- if d2 != "" && cr != nil && cr.Type == CacheRecordTypeSymlink {
- // getFollowLinks only handles symlinks in path
- // components before the last component, so
- // handle last component in d1 specially.
- resolved := string(convertKeyToPath(k))
- for {
- v, ok := root.Get(k)
-
- if !ok {
- return d1, k, false, nil
- }
- if v.(*CacheRecord).Type != CacheRecordTypeSymlink {
- break
- }
-
- linksWalked++
- if linksWalked > 255 {
- return "", k, false, errors.Errorf("too many links")
- }
-
- resolved := cleanLink(resolved, v.(*CacheRecord).Linkname)
- k = convertPathToKey([]byte(resolved))
- }
- }
return d1, k, cr != nil, nil
}
@@ -814,19 +767,22 @@ func containsWildcards(name string) bool {
return false
}
-func (cc *cacheContext) checksumNoFollow(ctx context.Context, m *mount, p string) (*CacheRecord, error) {
+func (cc *cacheContext) lazyChecksum(ctx context.Context, m *mount, p string, followTrailing bool) (digest.Digest, error) {
p = keyPath(p)
+ k := convertPathToKey(p)
+ // Try to look up the path directly without doing a scan.
cc.mu.RLock()
if cc.txn == nil {
root := cc.tree.Root()
cc.mu.RUnlock()
- v, ok := root.Get(convertPathToKey([]byte(p)))
- if ok {
- cr := v.(*CacheRecord)
- if cr.Digest != "" {
- return cr, nil
- }
+
+ _, cr, err := getFollowLinks(root, k, followTrailing)
+ if err != nil {
+ return "", err
+ }
+ if cr != nil && cr.Digest != "" {
+ return cr.Digest, nil
}
} else {
cc.mu.RUnlock()
@@ -846,7 +802,11 @@ func (cc *cacheContext) checksumNoFollow(ctx context.Context, m *mount, p string
}
}()
- return cc.lazyChecksum(ctx, m, p)
+ cr, err := cc.scanChecksum(ctx, m, p, followTrailing)
+ if err != nil {
+ return "", err
+ }
+ return cr.Digest, nil
}
func (cc *cacheContext) commitActiveTransaction() {
@@ -854,7 +814,7 @@ func (cc *cacheContext) commitActiveTransaction() {
addParentToMap(d, cc.dirtyMap)
}
for d := range cc.dirtyMap {
- k := convertPathToKey([]byte(d))
+ k := convertPathToKey(d)
if _, ok := cc.txn.Get(k); ok {
cc.txn.Insert(k, &CacheRecord{Type: CacheRecordTypeDir})
}
@@ -865,21 +825,21 @@ func (cc *cacheContext) commitActiveTransaction() {
cc.txn = nil
}
-func (cc *cacheContext) lazyChecksum(ctx context.Context, m *mount, p string) (*CacheRecord, error) {
+func (cc *cacheContext) scanChecksum(ctx context.Context, m *mount, p string, followTrailing bool) (*CacheRecord, error) {
root := cc.tree.Root()
- scan, err := cc.needsScan(root, p)
+ scan, err := cc.needsScan(root, p, followTrailing)
if err != nil {
return nil, err
}
if scan {
- if err := cc.scanPath(ctx, m, p); err != nil {
+ if err := cc.scanPath(ctx, m, p, followTrailing); err != nil {
return nil, err
}
}
- k := convertPathToKey([]byte(p))
+ k := convertPathToKey(p)
txn := cc.tree.Txn()
root = txn.Root()
- cr, updated, err := cc.checksum(ctx, root, txn, m, k, true)
+ cr, updated, err := cc.checksum(ctx, root, txn, m, k, followTrailing)
if err != nil {
return nil, err
}
@@ -888,9 +848,9 @@ func (cc *cacheContext) lazyChecksum(ctx context.Context, m *mount, p string) (*
return cr, err
}
-func (cc *cacheContext) checksum(ctx context.Context, root *iradix.Node, txn *iradix.Txn, m *mount, k []byte, follow bool) (*CacheRecord, bool, error) {
+func (cc *cacheContext) checksum(ctx context.Context, root *iradix.Node, txn *iradix.Txn, m *mount, k []byte, followTrailing bool) (*CacheRecord, bool, error) {
origk := k
- k, cr, err := getFollowLinks(root, k, follow)
+ k, cr, err := getFollowLinks(root, k, followTrailing)
if err != nil {
return nil, false, err
}
@@ -916,7 +876,9 @@ func (cc *cacheContext) checksum(ctx context.Context, root *iradix.Node, txn *ir
}
h.Write(bytes.TrimPrefix(subk, k))
- subcr, _, err := cc.checksum(ctx, root, txn, m, subk, true)
+ // We do not follow trailing links when checksumming a directory's
+ // contents.
+ subcr, _, err := cc.checksum(ctx, root, txn, m, subk, false)
if err != nil {
return nil, false, err
}
@@ -933,7 +895,7 @@ func (cc *cacheContext) checksum(ctx context.Context, root *iradix.Node, txn *ir
dgst = digest.NewDigest(digest.SHA256, h)
default:
- p := string(convertKeyToPath(bytes.TrimSuffix(k, []byte{0})))
+ p := convertKeyToPath(bytes.TrimSuffix(k, []byte{0}))
target, err := m.mount(ctx)
if err != nil {
@@ -965,42 +927,82 @@ func (cc *cacheContext) checksum(ctx context.Context, root *iradix.Node, txn *ir
return cr2, true, nil
}
-// needsScan returns false if path is in the tree or a parent path is in tree
-// and subpath is missing
-func (cc *cacheContext) needsScan(root *iradix.Node, p string) (bool, error) {
- var linksWalked int
- return cc.needsScanFollow(root, p, &linksWalked)
+// pathSet is a set of path prefixes that can be used to see if a given path is
+// lexically a child of any path in the set. All paths provided to this set
+// MUST be absolute and use / as the separator.
+type pathSet struct {
+ // prefixes contains paths of the form "/a/b/", so that we correctly detect
+ // /a/b as being a parent of /a/b/c but not /a/bc.
+ prefixes []string
}
-func (cc *cacheContext) needsScanFollow(root *iradix.Node, p string, linksWalked *int) (bool, error) {
- if p == "/" {
- p = ""
- }
- v, ok := root.Get(convertPathToKey([]byte(p)))
- if !ok {
- if p == "" {
- return true, nil
+// add a path to the set. This is a no-op if includes(path) == true.
+func (s *pathSet) add(p string) {
+ // Ensure the path is absolute and clean.
+ p = path.Join("/", p)
+ if !s.includes(p) {
+ if p != "/" {
+ p += "/"
}
- return cc.needsScanFollow(root, path.Clean(path.Dir(p)), linksWalked)
+ s.prefixes = append(s.prefixes, p)
+ }
+}
+
+// includes returns true iff there is a path in the pathSet which is a lexical
+// parent of the given path. The provided path MUST be an absolute path and
+// MUST NOT contain any ".." components, as they will be path.Clean'd.
+func (s pathSet) includes(p string) bool {
+ // Ensure the path is absolute and clean.
+ p = path.Join("/", p)
+ if p != "/" {
+ p += "/"
}
- cr := v.(*CacheRecord)
- if cr.Type == CacheRecordTypeSymlink {
- if *linksWalked > 255 {
- return false, errTooManyLinks
+ for _, prefix := range s.prefixes {
+ if strings.HasPrefix(p, prefix) {
+ return true
}
- *linksWalked++
- link := path.Clean(cr.Linkname)
- if !path.IsAbs(cr.Linkname) {
- link = path.Join("/", path.Dir(p), link)
+ }
+ return false
+}
+
+// needsScan returns false if path is in the tree or a parent path is in tree
+// and subpath is missing.
+func (cc *cacheContext) needsScan(root *iradix.Node, path string, followTrailing bool) (bool, error) {
+ var (
+ goodPaths pathSet
+ hasParentInTree bool
+ )
+ k := convertPathToKey(path)
+ _, cr, err := getFollowLinksCallback(root, k, followTrailing, func(subpath string, cr *CacheRecord) error {
+ // If we found a path that exists in the cache, add it to the set of
+ // known-scanned paths. Otherwise, verify whether the not-found subpath
+ // is inside a known-scanned path (we might have hit a "..", taking us
+ // out of the scanned paths, or we might hit a non-existent path inside
+ // a scanned path). getFollowLinksCallback iterates left-to-right, so
+ // we will always hit ancestors first.
+ if cr != nil {
+ hasParentInTree = cr.Type != CacheRecordTypeSymlink
+ goodPaths.add(subpath)
+ } else {
+ hasParentInTree = goodPaths.includes(subpath)
}
- return cc.needsScanFollow(root, link, linksWalked)
+ return nil
+ })
+ if err != nil {
+ return false, err
}
- return false, nil
+ return cr == nil && !hasParentInTree, nil
}
-func (cc *cacheContext) scanPath(ctx context.Context, m *mount, p string) (retErr error) {
+// Only used by TestNeedScanChecksumRegression to make sure scanPath is not
+// called for paths we have already scanned.
+var (
+ scanCounterEnable bool
+ scanCounter atomic.Uint64
+)
+
+func (cc *cacheContext) scanPath(ctx context.Context, m *mount, p string, followTrailing bool) (retErr error) {
p = path.Join("/", p)
- d, _ := path.Split(p)
mp, err := m.mount(ctx)
if err != nil {
@@ -1010,33 +1012,42 @@ func (cc *cacheContext) scanPath(ctx context.Context, m *mount, p string) (retEr
n := cc.tree.Root()
txn := cc.tree.Txn()
- parentPath, err := rootPath(mp, filepath.FromSlash(d), func(p, link string) error {
+ resolvedPath, err := rootPath(mp, filepath.FromSlash(p), followTrailing, func(p, link string) error {
cr := &CacheRecord{
Type: CacheRecordTypeSymlink,
Linkname: filepath.ToSlash(link),
}
- k := []byte(path.Join("/", filepath.ToSlash(p)))
- k = convertPathToKey(k)
- txn.Insert(k, cr)
+ p = path.Join("/", filepath.ToSlash(p))
+ txn.Insert(convertPathToKey(p), cr)
return nil
})
if err != nil {
return err
}
- err = filepath.Walk(parentPath, func(itemPath string, fi os.FileInfo, err error) error {
+ // Scan the parent directory of the path we resolved, unless we're at the
+ // root (in which case we scan the root).
+ scanPath := filepath.Dir(resolvedPath)
+ if !strings.HasPrefix(filepath.ToSlash(scanPath)+"/", filepath.ToSlash(mp)+"/") {
+ scanPath = resolvedPath
+ }
+
+ err = filepath.Walk(scanPath, func(itemPath string, fi os.FileInfo, err error) error {
+ if scanCounterEnable {
+ scanCounter.Add(1)
+ }
if err != nil {
+ // If the root doesn't exist, ignore the error.
+ if itemPath == scanPath && errors.Is(err, os.ErrNotExist) {
+ return nil
+ }
return errors.Wrapf(err, "failed to walk %s", itemPath)
}
rel, err := filepath.Rel(mp, itemPath)
if err != nil {
return err
}
- k := []byte(path.Join("/", filepath.ToSlash(rel)))
- if string(k) == "/" {
- k = []byte{}
- }
- k = convertPathToKey(k)
+ k := convertPathToKey(keyPath(rel))
if _, ok := n.Get(k); !ok {
cr := &CacheRecord{
Type: CacheRecordTypeFile,
@@ -1069,55 +1080,118 @@ func (cc *cacheContext) scanPath(ctx context.Context, m *mount, p string) (retEr
return nil
}
-func getFollowLinks(root *iradix.Node, k []byte, follow bool) ([]byte, *CacheRecord, error) {
- var linksWalked int
- return getFollowLinksWalk(root, k, follow, &linksWalked)
+// followLinksCallback is called after we try to resolve each element. If the
+// path was not found, cr is nil.
+type followLinksCallback func(path string, cr *CacheRecord) error
+
+// getFollowLinks is shorthand for getFollowLinksCallback(..., nil).
+func getFollowLinks(root *iradix.Node, k []byte, followTrailing bool) ([]byte, *CacheRecord, error) {
+ return getFollowLinksCallback(root, k, followTrailing, nil)
}
-func getFollowLinksWalk(root *iradix.Node, k []byte, follow bool, linksWalked *int) ([]byte, *CacheRecord, error) {
+// getFollowLinksCallback looks up the requested key, fully resolving any
+// symlink components encountered. The implementation is heavily based on
+// <https://github.com/cyphar/filepath-securejoin>.
+//
+// followTrailing indicates whether the *final component* of the path should be
+// resolved; false gives O_PATH|O_NOFOLLOW-like behaviour. Note that (in contrast to some
+// Linux APIs), followTrailing is obeyed even if the key has a trailing slash
+// (though paths like "foo/link/." will cause the link to be resolved).
+//
+// cb is a callback that is called for each path component encountered during
+// path resolution (after the path component is looked up in the cache). This
+// means for a path like /a/b/c, the callback will be called for at least
+//
+// {/, /a, /a/b, /a/b/c}
+//
+// Note that if any of the components are symlinks, the paths will depend on
+// the symlink contents and there will be more callbacks. If the requested key
+// has a trailing slash, the callback will also be called for the final
+// trailing-slash lookup (/a/b/c/ in the above example). Note that
+// getFollowLinksCallback will try to look up the original key directly first
+// and the callback is not called for this first lookup.
+func getFollowLinksCallback(root *iradix.Node, k []byte, followTrailing bool, cb followLinksCallback) ([]byte, *CacheRecord, error) {
v, ok := root.Get(k)
- if ok {
+ if ok && (!followTrailing || v.(*CacheRecord).Type != CacheRecordTypeSymlink) {
return k, v.(*CacheRecord), nil
}
- if !follow || len(k) == 0 {
+ if len(k) == 0 {
return k, nil, nil
}
- dir, file := splitKey(k)
+ var (
+ currentPath = "/"
+ remainingPath = convertKeyToPath(k)
+ linksWalked int
+ cr *CacheRecord
+ )
+ // Trailing slashes are significant for the cache, but path.Clean strips
+ // them. We only care about the slash for the final lookup.
+ remainingPath, hadTrailingSlash := strings.CutSuffix(remainingPath, "/")
+ for remainingPath != "" {
+ // Get next component.
+ var part string
+ if i := strings.IndexRune(remainingPath, '/'); i == -1 {
+ part, remainingPath = remainingPath, ""
+ } else {
+ part, remainingPath = remainingPath[:i], remainingPath[i+1:]
+ }
- k, parent, err := getFollowLinksWalk(root, dir, follow, linksWalked)
- if err != nil {
- return nil, nil, err
- }
- if parent != nil {
- if parent.Type == CacheRecordTypeSymlink {
- *linksWalked++
- if *linksWalked > 255 {
- return nil, nil, errors.Errorf("too many links")
+ // Apply the component to the path. Since it is a single component, and
+ // our current path contains no symlinks, we can just apply it
+ // lexically.
+ nextPath := keyPath(path.Join("/", currentPath, part))
+ // In contrast to rootPath, we don't skip lookups for no-op components
+ // or / because we need to call the callback for every path component
+ // we hit (including /) and we need to make sure that the CacheRecord
+ // we return is correct after every iteration.
+
+ cr = nil
+ v, ok := root.Get(convertPathToKey(nextPath))
+ if ok {
+ cr = v.(*CacheRecord)
+ }
+ if cb != nil {
+ if err := cb(nextPath, cr); err != nil {
+ return nil, nil, err
}
+ }
+ if !ok || cr.Type != CacheRecordTypeSymlink {
+ currentPath = nextPath
+ continue
+ }
+ if !followTrailing && remainingPath == "" {
+ currentPath = nextPath
+ break
+ }
- link := cleanLink(string(convertKeyToPath(dir)), parent.Linkname)
- return getFollowLinksWalk(root, append(convertPathToKey([]byte(link)), file...), follow, linksWalked)
+ linksWalked++
+ if linksWalked > maxSymlinkLimit {
+ return nil, nil, errTooManyLinks
}
- }
- k = append(k, file...)
- v, ok = root.Get(k)
- if ok {
- return k, v.(*CacheRecord), nil
- }
- return k, nil, nil
-}
-func cleanLink(dir, linkname string) string {
- dirPath := path.Clean(dir)
- if dirPath == "." || dirPath == "/" {
- dirPath = ""
+ remainingPath = cr.Linkname + "/" + remainingPath
+ if path.IsAbs(cr.Linkname) {
+ currentPath = "/"
+ }
}
- link := path.Clean(linkname)
- if !path.IsAbs(link) {
- return path.Join("/", path.Join(path.Dir(dirPath), link))
+ // We've already looked up the final component. However, if there was a
+ // trailing slash in the original path, we need to do the lookup again with
+ // the slash applied.
+ if hadTrailingSlash {
+ cr = nil
+ currentPath += "/"
+ v, ok := root.Get(convertPathToKey(currentPath))
+ if ok {
+ cr = v.(*CacheRecord)
+ }
+ if cb != nil {
+ if err := cb(currentPath, cr); err != nil {
+ return nil, nil, err
+ }
+ }
}
- return link
+ return convertPathToKey(currentPath), cr, nil
}
func prepareDigest(fp, p string, fi os.FileInfo) (digest.Digest, error) {
@@ -1174,25 +1248,10 @@ func poolsCopy(dst io.Writer, src io.Reader) (written int64, err error) {
return
}
-func convertPathToKey(p []byte) []byte {
+func convertPathToKey(p string) []byte {
return bytes.Replace([]byte(p), []byte("/"), []byte{0}, -1)
}
-func convertKeyToPath(p []byte) []byte {
- return bytes.Replace([]byte(p), []byte{0}, []byte("/"), -1)
-}
-
-func splitKey(k []byte) ([]byte, []byte) {
- foundBytes := false
- i := len(k) - 1
- for {
- if i <= 0 || foundBytes && k[i] == 0 {
- break
- }
- if k[i] != 0 {
- foundBytes = true
- }
- i--
- }
- return append([]byte{}, k[:i]...), k[i:]
+func convertKeyToPath(p []byte) string {
+ return string(bytes.Replace(p, []byte{0}, []byte("/"), -1))
}
diff --git a/vendor/github.com/moby/buildkit/cache/contenthash/path.go b/vendor/github.com/moby/buildkit/cache/contenthash/path.go
index 42b7fd8349c7..ae950f713241 100644
--- a/vendor/github.com/moby/buildkit/cache/contenthash/path.go
+++ b/vendor/github.com/moby/buildkit/cache/contenthash/path.go
@@ -1,108 +1,111 @@
+// This code mostly comes from <https://github.com/cyphar/filepath-securejoin>.
+
+// Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved.
+// Copyright (C) 2017-2024 SUSE LLC. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
package contenthash
import (
"os"
"path/filepath"
+ "strings"
"github.com/pkg/errors"
)
-var (
- errTooManyLinks = errors.New("too many links")
-)
+var errTooManyLinks = errors.New("too many links")
+
+const maxSymlinkLimit = 255
type onSymlinkFunc func(string, string) error
-// rootPath joins a path with a root, evaluating and bounding any
-// symlink to the root directory.
-// This is containerd/continuity/fs RootPath implementation with a callback on
-// resolving the symlink.
-func rootPath(root, path string, cb onSymlinkFunc) (string, error) {
- if path == "" {
+// rootPath joins a path with a root, evaluating and bounding any symlink to
+// the root directory. This is a slightly modified version of SecureJoin from
+// github.com/cyphar/filepath-securejoin, with a callback which we call after
+// each symlink resolution.
+func rootPath(root, unsafePath string, followTrailing bool, cb onSymlinkFunc) (string, error) {
+ if unsafePath == "" {
return root, nil
}
- var linksWalked int // to protect against cycles
- for {
- i := linksWalked
- newpath, err := walkLinks(root, path, &linksWalked, cb)
- if err != nil {
- return "", err
- }
- path = newpath
- if i == linksWalked {
- newpath = filepath.Join("/", newpath)
- if path == newpath {
- return filepath.Join(root, newpath), nil
- }
- path = newpath
- }
- }
-}
-func walkLink(root, path string, linksWalked *int, cb onSymlinkFunc) (newpath string, islink bool, err error) {
- if *linksWalked > 255 {
- return "", false, errTooManyLinks
- }
+ unsafePath = filepath.FromSlash(unsafePath)
+ var (
+ currentPath string
+ linksWalked int
+ )
+ for unsafePath != "" {
+ // Windows-specific: remove any drive letters from the path.
+ if v := filepath.VolumeName(unsafePath); v != "" {
+ unsafePath = unsafePath[len(v):]
+ }
- path = filepath.Join("/", path)
- if path == "/" {
- return path, false, nil
- }
- realPath := filepath.Join(root, path)
+ // Remove any unnecessary trailing slashes.
+ unsafePath = strings.TrimSuffix(unsafePath, string(filepath.Separator))
- fi, err := os.Lstat(realPath)
- if err != nil {
- // If path does not yet exist, treat as non-symlink
- if errors.Is(err, os.ErrNotExist) {
- return path, false, nil
+ // Get the next path component.
+ var part string
+ if i := strings.IndexRune(unsafePath, filepath.Separator); i == -1 {
+ part, unsafePath = unsafePath, ""
+ } else {
+ part, unsafePath = unsafePath[:i], unsafePath[i+1:]
}
- return "", false, err
- }
- if fi.Mode()&os.ModeSymlink == 0 {
- return path, false, nil
- }
- newpath, err = os.Readlink(realPath)
- if err != nil {
- return "", false, err
- }
- if cb != nil {
- if err := cb(path, newpath); err != nil {
- return "", false, err
- }
- }
- *linksWalked++
- return newpath, true, nil
-}
-func walkLinks(root, path string, linksWalked *int, cb onSymlinkFunc) (string, error) {
- switch dir, file := filepath.Split(path); {
- case dir == "":
- newpath, _, err := walkLink(root, file, linksWalked, cb)
- return newpath, err
- case file == "":
- if os.IsPathSeparator(dir[len(dir)-1]) {
- if dir == "/" {
- return dir, nil
- }
- return walkLinks(root, dir[:len(dir)-1], linksWalked, cb)
+ // Apply the component lexically to the path we are building. path does
+ // not contain any symlinks, and we are lexically dealing with a single
+ // component, so it's okay to do filepath.Clean here.
+ nextPath := filepath.Join(string(filepath.Separator), currentPath, part)
+ if nextPath == string(filepath.Separator) {
+ // If we end up back at the root, we don't need to re-evaluate /.
+ currentPath = ""
+ continue
}
- newpath, _, err := walkLink(root, dir, linksWalked, cb)
- return newpath, err
- default:
- newdir, err := walkLinks(root, dir, linksWalked, cb)
- if err != nil {
+ fullPath := root + string(filepath.Separator) + nextPath
+
+ // Figure out whether the path is a symlink.
+ fi, err := os.Lstat(fullPath)
+ if err != nil && !errors.Is(err, os.ErrNotExist) {
return "", err
}
- newpath, islink, err := walkLink(root, filepath.Join(newdir, file), linksWalked, cb)
+ // Treat non-existent path components the same as non-symlinks (we
+ // can't do any better here).
+ if errors.Is(err, os.ErrNotExist) || fi.Mode()&os.ModeSymlink == 0 {
+ currentPath = nextPath
+ continue
+ }
+ // Don't resolve the final component with !followTrailing.
+ if !followTrailing && unsafePath == "" {
+ currentPath = nextPath
+ break
+ }
+
+ // It's a symlink, so get its contents and expand it by prepending it
+ // to the yet-unparsed path.
+ linksWalked++
+ if linksWalked > maxSymlinkLimit {
+ return "", errTooManyLinks
+ }
+
+ dest, err := os.Readlink(fullPath)
if err != nil {
return "", err
}
- if !islink {
- return newpath, nil
+ if cb != nil {
+ if err := cb(nextPath, dest); err != nil {
+ return "", err
+ }
}
- if filepath.IsAbs(newpath) {
- return newpath, nil
+
+ unsafePath = dest + string(filepath.Separator) + unsafePath
+ // Absolute symlinks reset any work we've already done.
+ if filepath.IsAbs(dest) {
+ currentPath = ""
}
- return filepath.Join(newdir, newpath), nil
}
+
+ // There should be no lexical components left in path here, but just for
+ // safety do a filepath.Clean before the join.
+ finalPath := filepath.Join(string(filepath.Separator), currentPath)
+ return filepath.Join(root, finalPath), nil
}
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 9adbc22b99fc..27bc31dfd397 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -577,7 +577,7 @@ github.com/mistifyio/go-zfs/v3
# github.com/mitchellh/hashstructure/v2 v2.0.2
## explicit; go 1.14
github.com/mitchellh/hashstructure/v2
-# github.com/moby/buildkit v0.11.7-0.20240124010513-435cb77e369c => github.com/SUSE/buildkit v0.0.0-20241218053907-cd804dd86389
+# github.com/moby/buildkit v0.11.7-0.20240124010513-435cb77e369c => github.com/SUSE/buildkit v0.0.0-20241218053911-6b814972ef19
## explicit; go 1.18
github.com/moby/buildkit/api/services/control
github.com/moby/buildkit/api/types
@@ -1313,4 +1313,4 @@ k8s.io/klog/v2/internal/severity
# resenje.org/singleflight v0.3.0
## explicit; go 1.18
resenje.org/singleflight
-# github.com/moby/buildkit => github.com/SUSE/buildkit v0.0.0-20241218053907-cd804dd86389
+# github.com/moby/buildkit => github.com/SUSE/buildkit v0.0.0-20241218053911-6b814972ef19
--
2.48.1

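The core of the patch above is the switch from the recursive link-walking helpers (getFollowLinksWalk, walkLinks) to the component-by-component loop used by filepath-securejoin: peel off one path component, apply it lexically, and if it names a symlink, splice the link target back onto the still-unparsed remainder. A minimal standalone sketch of that loop, using a plain map of symlinks in place of the cache records (illustrative only, not the vendored code):

package main

import (
	"errors"
	"fmt"
	"path"
	"strings"
)

const maxSymlinkLimit = 255

// resolve is a toy version of the securejoin-style loop: take one component
// at a time, apply it lexically, and if it names a symlink, prepend the link
// target to the unparsed remainder. links maps absolute paths to symlink
// targets and stands in for the cache's CacheRecord lookups.
func resolve(links map[string]string, p string, followTrailing bool) (string, error) {
	currentPath := "/"
	remaining := strings.Trim(path.Clean("/"+p), "/")
	linksWalked := 0
	for remaining != "" {
		var part string
		if i := strings.IndexRune(remaining, '/'); i == -1 {
			part, remaining = remaining, ""
		} else {
			part, remaining = remaining[:i], remaining[i+1:]
		}
		nextPath := path.Join("/", currentPath, part)
		target, isLink := links[nextPath]
		// Plain components, and trailing symlinks when followTrailing is
		// false, are applied lexically and left unresolved.
		if !isLink || (!followTrailing && remaining == "") {
			currentPath = nextPath
			continue
		}
		linksWalked++
		if linksWalked > maxSymlinkLimit {
			return "", errors.New("too many links")
		}
		remaining = target + "/" + remaining
		if path.IsAbs(target) {
			currentPath = "/" // absolute targets reset any progress so far
		}
	}
	return currentPath, nil
}

func main() {
	links := map[string]string{"/a/link": "/b", "/loop": "/loop"}
	fmt.Println(resolve(links, "a/link/c", true)) // /b/c <nil>
	fmt.Println(resolve(links, "a/link", false))  // /a/link <nil>
	fmt.Println(resolve(links, "loop/x", true))   // "", too many links
}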
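needsScan, in turn, relies on the new pathSet to decide whether a missing subpath is already covered by a scanned ancestor. The trailing-slash storage is what keeps /a/b from being mistaken for a parent of /a/bc; a self-contained sketch of just that check, assuming nothing beyond the standard library:

package main

import (
	"fmt"
	"path"
	"strings"
)

// prefixSet mirrors the pathSet trick from the patch: prefixes are stored
// with a trailing "/" so that "/a/b" is detected as a parent of "/a/b/c"
// but not of "/a/bc".
type prefixSet struct{ prefixes []string }

func (s *prefixSet) add(p string) {
	p = path.Join("/", p) // make absolute and clean
	if s.includes(p) {
		return
	}
	if p != "/" {
		p += "/"
	}
	s.prefixes = append(s.prefixes, p)
}

func (s *prefixSet) includes(p string) bool {
	p = path.Join("/", p)
	if p != "/" {
		p += "/"
	}
	for _, prefix := range s.prefixes {
		if strings.HasPrefix(p, prefix) {
			return true
		}
	}
	return false
}

func main() {
	var s prefixSet
	s.add("/a/b")
	fmt.Println(s.includes("/a/b/c")) // true: /a/b/ is a lexical parent
	fmt.Println(s.includes("/a/bc"))  // false: /a/bc/ does not match /a/b/
}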

@@ -1,54 +0,0 @@
From 12c8b7a22f7140b5b4d2c87a7e5d70da082fe558 Mon Sep 17 00:00:00 2001
From: Aleksa Sarai <cyphar@cyphar.com>
Date: Wed, 19 Jun 2024 16:30:49 +1000
Subject: [PATCH 08/13] bsc1214855: volume: use AtomicWriteFile to save volume
options
If the system (or Docker) crashes while saving the volume options, on
restart the daemon will error out when trying to read the options file
because it doesn't contain valid JSON.
In such a crash scenario, the new volume will be treated as though it
has the default options configuration. This is not ideal, but volumes
created on very old Docker versions (pre-1.11[1], circa 2016) do not
have opts.json and so doing some kind of cleanup when loading the volume
store (even if we take care to only delete empty volumes) could delete
existing volumes carried over from very old Docker versions that users
would not expect to disappear.
Ultimately, if a user creates a volume and the system crashes, a volume
that has the wrong config is better than Docker not being able to start.
[1]: commit b05b2370757d ("Support mount opts for `local` volume driver")
SUSE-Bugs: https://bugzilla.suse.com/show_bug.cgi?id=1214855
(Cherry-picked from commit b4c20da143502e5fc21cc4996b63e83691c515bf.)
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
---
volume/local/local.go | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/volume/local/local.go b/volume/local/local.go
index b4f3a3669a84..077b26f1b813 100644
--- a/volume/local/local.go
+++ b/volume/local/local.go
@@ -16,6 +16,7 @@ import (
"github.com/docker/docker/daemon/names"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/idtools"
+ "github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/quota"
"github.com/docker/docker/volume"
"github.com/pkg/errors"
@@ -381,7 +382,7 @@ func (v *localVolume) saveOpts() error {
if err != nil {
return err
}
- err = os.WriteFile(filepath.Join(v.rootPath, "opts.json"), b, 0600)
+ err = ioutils.AtomicWriteFile(filepath.Join(v.rootPath, "opts.json"), b, 0o600)
if err != nil {
return errdefs.System(errors.Wrap(err, "error while persisting volume options"))
}
--
2.48.1

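The one-line fix relies on the classic write-temp-then-rename pattern: rename(2) replaces the target atomically, so a crash leaves either the old opts.json or the new one, never a truncated mix. A generic sketch of the pattern (an illustration, not the actual pkg/ioutils implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// atomicWriteFile illustrates the temp-file-plus-rename pattern that
// AtomicWriteFile is built on: the data is fully written and synced to a
// temporary file first, so a crash mid-write leaves the old file intact,
// and the final rename swaps the new file in atomically.
func atomicWriteFile(filename string, data []byte, perm os.FileMode) error {
	tmp, err := os.CreateTemp(filepath.Dir(filename), ".tmp-opts-")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // best-effort cleanup; no-op after the rename
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Sync(); err != nil { // make sure the data hits disk first
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	if err := os.Chmod(tmp.Name(), perm); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), filename)
}

func main() {
	fmt.Println(atomicWriteFile("/tmp/opts.json", []byte(`{"driver":"local"}`), 0o600))
}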

@@ -1,209 +0,0 @@
From 49605be604df94e216168288cdbcae0fda04d641 Mon Sep 17 00:00:00 2001
From: Jameson Hyde <jameson.hyde@docker.com>
Date: Mon, 26 Nov 2018 14:15:22 -0500
Subject: [PATCH 09/13] CVE-2024-41110: AuthZ plugin security fixes
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
This is a backport of the following upstream patches:
* 2ac8a479c53d ("Authz plugin security fixes for 0-length content and path validation")
* 5282cb25d09d ("If url includes scheme, urlPath will drop hostname, which would not match the auth check")
Signed-off-by: Jameson Hyde <jameson.hyde@docker.com>
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
---
pkg/authorization/authz.go | 38 +++++++++++--
pkg/authorization/authz_unix_test.go | 84 +++++++++++++++++++++++++++-
2 files changed, 115 insertions(+), 7 deletions(-)
diff --git a/pkg/authorization/authz.go b/pkg/authorization/authz.go
index 590ac8dddd88..68ed8bbdaf97 100644
--- a/pkg/authorization/authz.go
+++ b/pkg/authorization/authz.go
@@ -7,6 +7,8 @@ import (
"io"
"mime"
"net/http"
+ "net/url"
+ "regexp"
"strings"
"github.com/docker/docker/pkg/ioutils"
@@ -52,10 +54,23 @@ type Ctx struct {
authReq *Request
}
+func isChunked(r *http.Request) bool {
+ // RFC 7230 specifies that content length is to be ignored if Transfer-Encoding is chunked
+ if strings.EqualFold(r.Header.Get("Transfer-Encoding"), "chunked") {
+ return true
+ }
+ for _, v := range r.TransferEncoding {
+ if strings.EqualFold(v, "chunked") {
+ return true
+ }
+ }
+ return false
+}
+
// AuthZRequest authorized the request to the docker daemon using authZ plugins
func (ctx *Ctx) AuthZRequest(w http.ResponseWriter, r *http.Request) error {
var body []byte
- if sendBody(ctx.requestURI, r.Header) && r.ContentLength > 0 && r.ContentLength < maxBodySize {
+ if sendBody(ctx.requestURI, r.Header) && (r.ContentLength > 0 || isChunked(r)) && r.ContentLength < maxBodySize {
var err error
body, r.Body, err = drainBody(r.Body)
if err != nil {
@@ -108,7 +123,6 @@ func (ctx *Ctx) AuthZResponse(rm ResponseModifier, r *http.Request) error {
if sendBody(ctx.requestURI, rm.Header()) {
ctx.authReq.ResponseBody = rm.RawBody()
}
-
for _, plugin := range ctx.plugins {
logrus.Debugf("AuthZ response using plugin %s", plugin.Name())
@@ -146,10 +160,26 @@ func drainBody(body io.ReadCloser) ([]byte, io.ReadCloser, error) {
return nil, newBody, err
}
+func isAuthEndpoint(urlPath string) (bool, error) {
+ // eg www.test.com/v1.24/auth/optional?optional1=something&optional2=something (version optional)
+ matched, err := regexp.MatchString(`^[^\/]*\/(v\d[\d\.]*\/)?auth.*`, urlPath)
+ if err != nil {
+ return false, err
+ }
+ return matched, nil
+}
+
// sendBody returns true when request/response body should be sent to AuthZPlugin
-func sendBody(url string, header http.Header) bool {
+func sendBody(inURL string, header http.Header) bool {
+ u, err := url.Parse(inURL)
+ // Assume no if the URL cannot be parsed - an empty request will still be forwarded to the plugin and should be rejected
+ if err != nil {
+ return false
+ }
+
// Skip body for auth endpoint
- if strings.HasSuffix(url, "/auth") {
+ isAuth, err := isAuthEndpoint(u.Path)
+ if isAuth || err != nil {
return false
}
diff --git a/pkg/authorization/authz_unix_test.go b/pkg/authorization/authz_unix_test.go
index 835cb703839b..8bfe44e1a840 100644
--- a/pkg/authorization/authz_unix_test.go
+++ b/pkg/authorization/authz_unix_test.go
@@ -175,8 +175,8 @@ func TestDrainBody(t *testing.T) {
func TestSendBody(t *testing.T) {
var (
- url = "nothing.com"
testcases = []struct {
+ url string
contentType string
expected bool
}{
@@ -220,15 +220,93 @@ func TestSendBody(t *testing.T) {
contentType: "",
expected: false,
},
+ {
+ url: "nothing.com/auth",
+ contentType: "",
+ expected: false,
+ },
+ {
+ url: "nothing.com/auth",
+ contentType: "application/json;charset=UTF8",
+ expected: false,
+ },
+ {
+ url: "nothing.com/auth?p1=test",
+ contentType: "application/json;charset=UTF8",
+ expected: false,
+ },
+ {
+ url: "nothing.com/test?p1=/auth",
+ contentType: "application/json;charset=UTF8",
+ expected: true,
+ },
+ {
+ url: "nothing.com/something/auth",
+ contentType: "application/json;charset=UTF8",
+ expected: true,
+ },
+ {
+ url: "nothing.com/auth/test",
+ contentType: "application/json;charset=UTF8",
+ expected: false,
+ },
+ {
+ url: "nothing.com/v1.24/auth/test",
+ contentType: "application/json;charset=UTF8",
+ expected: false,
+ },
+ {
+ url: "nothing.com/v1/auth/test",
+ contentType: "application/json;charset=UTF8",
+ expected: false,
+ },
+ {
+ url: "www.nothing.com/v1.24/auth/test",
+ contentType: "application/json;charset=UTF8",
+ expected: false,
+ },
+ {
+ url: "https://www.nothing.com/v1.24/auth/test",
+ contentType: "application/json;charset=UTF8",
+ expected: false,
+ },
+ {
+ url: "http://nothing.com/v1.24/auth/test",
+ contentType: "application/json;charset=UTF8",
+ expected: false,
+ },
+ {
+ url: "www.nothing.com/test?p1=/auth",
+ contentType: "application/json;charset=UTF8",
+ expected: true,
+ },
+ {
+ url: "http://www.nothing.com/test?p1=/auth",
+ contentType: "application/json;charset=UTF8",
+ expected: true,
+ },
+ {
+ url: "www.nothing.com/something/auth",
+ contentType: "application/json;charset=UTF8",
+ expected: true,
+ },
+ {
+ url: "https://www.nothing.com/something/auth",
+ contentType: "application/json;charset=UTF8",
+ expected: true,
+ },
}
)
for _, testcase := range testcases {
header := http.Header{}
header.Set("Content-Type", testcase.contentType)
+ if testcase.url == "" {
+ testcase.url = "nothing.com"
+ }
- if b := sendBody(url, header); b != testcase.expected {
- t.Fatalf("Unexpected Content-Type; Expected: %t, Actual: %t", testcase.expected, b)
+ if b := sendBody(testcase.url, header); b != testcase.expected {
+ t.Fatalf("sendBody failed: url: %s, content-type: %s; Expected: %t, Actual: %t", testcase.url, testcase.contentType, testcase.expected, b)
}
}
}
--
2.48.1

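The behavioural change worth noting: the auth-endpoint check now runs against the parsed URL path instead of a raw suffix match, so a query string like ?p1=/auth no longer suppresses the body, and scheme-qualified URLs with versioned /vX.Y/auth paths are correctly skipped. A small sketch exercising the regex from the patch (true means the body is withheld from the plugin):

package main

import (
	"fmt"
	"net/url"
	"regexp"
)

// The auth-endpoint pattern from the patch: an optional API version followed
// by "auth", matched against the path component only.
var authRe = regexp.MustCompile(`^[^\/]*\/(v\d[\d\.]*\/)?auth.*`)

func main() {
	for _, raw := range []string{
		"nothing.com/auth",                        // auth endpoint: skip body
		"https://www.nothing.com/v1.24/auth/test", // versioned, with scheme: skip body
		"nothing.com/test?p1=/auth",               // query string only: send body
	} {
		u, err := url.Parse(raw)
		if err != nil {
			continue // unparsable URLs are treated as "don't send the body"
		}
		fmt.Printf("%-42s isAuth=%v\n", raw, authRe.MatchString(u.Path))
	}
}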

@@ -1,139 +0,0 @@
From 60abff4c864c08b4ea05d96a304f6cf3f0cca787 Mon Sep 17 00:00:00 2001
From: Albin Kerouanton <albinker@gmail.com>
Date: Tue, 10 Oct 2023 01:13:25 +0200
Subject: [PATCH 10/13] CVE-2024-29018: libnet: Don't forward to upstream
resolvers on internal nw
Commit cbc2a71c2 makes `connect` syscall fail fast when a container is
only attached to an internal network. Thanks to that, if such a
container tries to resolve an "external" domain, the embedded resolver
returns an error immediately instead of waiting for a timeout.
This commit makes sure the embedded resolver doesn't even try to forward
to upstream servers.
Co-authored-by: Albin Kerouanton <albinker@gmail.com>
Signed-off-by: Rob Murray <rob.murray@docker.com>
(Cherry-picked from commit 790c3039d0ca5ed86ecd099b4b571496607628bc.)
[Drop test additions and test-related patches.]
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
---
libnetwork/endpoint.go | 12 +++++++++++-
libnetwork/resolver.go | 17 +++++++++++++----
libnetwork/sandbox_dns_unix.go | 6 +++++-
3 files changed, 29 insertions(+), 6 deletions(-)
diff --git a/libnetwork/endpoint.go b/libnetwork/endpoint.go
index b9903bb90188..b90500ce97a1 100644
--- a/libnetwork/endpoint.go
+++ b/libnetwork/endpoint.go
@@ -520,8 +520,13 @@ func (ep *Endpoint) sbJoin(sb *Sandbox, options ...EndpointOption) (err error) {
return sb.setupDefaultGW()
}
- moveExtConn := sb.getGatewayEndpoint() != extEp
+ currentExtEp := sb.getGatewayEndpoint()
+ // Enable upstream forwarding if the sandbox gained external connectivity.
+ if sb.resolver != nil {
+ sb.resolver.SetForwardingPolicy(currentExtEp != nil)
+ }
+ moveExtConn := currentExtEp != extEp
if moveExtConn {
if extEp != nil {
logrus.Debugf("Revoking external connectivity on endpoint %s (%s)", extEp.Name(), extEp.ID())
@@ -751,6 +756,11 @@ func (ep *Endpoint) sbLeave(sb *Sandbox, force bool, options ...EndpointOption)
// New endpoint providing external connectivity for the sandbox
extEp = sb.getGatewayEndpoint()
+ // Disable upstream forwarding if the sandbox lost external connectivity.
+ if sb.resolver != nil {
+ sb.resolver.SetForwardingPolicy(extEp != nil)
+ }
+
if moveExtConn && extEp != nil {
logrus.Debugf("Programming external connectivity on endpoint %s (%s)", extEp.Name(), extEp.ID())
extN, err := extEp.getNetworkFromStore()
diff --git a/libnetwork/resolver.go b/libnetwork/resolver.go
index ab19b7b08fc0..70ca33b53590 100644
--- a/libnetwork/resolver.go
+++ b/libnetwork/resolver.go
@@ -7,6 +7,7 @@ import (
"net"
"strings"
"sync"
+ "sync/atomic"
"time"
"github.com/docker/docker/libnetwork/types"
@@ -69,7 +70,7 @@ type Resolver struct {
tcpListen *net.TCPListener
err error
listenAddress string
- proxyDNS bool
+ proxyDNS atomic.Bool
startCh chan struct{}
logger *logrus.Logger
@@ -79,15 +80,17 @@ type Resolver struct {
// NewResolver creates a new instance of the Resolver
func NewResolver(address string, proxyDNS bool, backend DNSBackend) *Resolver {
- return &Resolver{
+ r := &Resolver{
backend: backend,
- proxyDNS: proxyDNS,
listenAddress: address,
err: fmt.Errorf("setup not done yet"),
startCh: make(chan struct{}, 1),
fwdSem: semaphore.NewWeighted(maxConcurrent),
logInverval: rate.Sometimes{Interval: logInterval},
}
+ r.proxyDNS.Store(proxyDNS)
+
+ return r
}
func (r *Resolver) log() *logrus.Logger {
@@ -192,6 +195,12 @@ func (r *Resolver) SetExtServers(extDNS []extDNSEntry) {
}
}
+// SetForwardingPolicy re-configures the embedded DNS resolver to either enable or disable forwarding DNS queries to
+// external servers.
+func (r *Resolver) SetForwardingPolicy(policy bool) {
+ r.proxyDNS.Store(policy)
+}
+
// NameServer returns the IP of the DNS resolver for the containers.
func (r *Resolver) NameServer() string {
return r.listenAddress
@@ -407,7 +416,7 @@ func (r *Resolver) serveDNS(w dns.ResponseWriter, query *dns.Msg) {
return
}
- if r.proxyDNS {
+ if r.proxyDNS.Load() {
// If the user sets ndots > 0 explicitly and the query is
// in the root domain don't forward it out. We will return
// failure and let the client retry with the search domain
diff --git a/libnetwork/sandbox_dns_unix.go b/libnetwork/sandbox_dns_unix.go
index 2218c6960e45..e3bb9abce93b 100644
--- a/libnetwork/sandbox_dns_unix.go
+++ b/libnetwork/sandbox_dns_unix.go
@@ -28,7 +28,11 @@ const (
func (sb *Sandbox) startResolver(restore bool) {
sb.resolverOnce.Do(func() {
var err error
- sb.resolver = NewResolver(resolverIPSandbox, true, sb)
+ // The resolver is started with proxyDNS=false if the sandbox does not currently
+ // have a gateway. So, if the Sandbox is only connected to an 'internal' network,
+ // it will not forward DNS requests to external resolvers. The resolver's
+ // proxyDNS setting is then updated as network Endpoints are added/removed.
+ sb.resolver = NewResolver(resolverIPSandbox, sb.getGatewayEndpoint() != nil, sb)
defer func() {
if err != nil {
sb.resolver = nil
--
2.48.1

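The pattern here is worth calling out: proxyDNS becomes an atomic.Bool so the endpoint join/leave paths can flip the forwarding policy at runtime without taking a lock on the hot DNS-serving path. A minimal sketch of that shape (illustrative, not the libnetwork code):

package main

import (
	"fmt"
	"sync/atomic"
)

// resolver sketches the patched shape: an atomic.Bool lets the forwarding
// policy be flipped from endpoint join/leave without locking in serve.
type resolver struct{ proxyDNS atomic.Bool }

func (r *resolver) SetForwardingPolicy(forward bool) { r.proxyDNS.Store(forward) }

func (r *resolver) serve(query string) string {
	if r.proxyDNS.Load() {
		return "forward " + query + " to upstream resolvers"
	}
	return "refuse " + query // internal-only network: never leave the sandbox
}

func main() {
	r := &resolver{}
	r.SetForwardingPolicy(false) // sandbox has no gateway endpoint
	fmt.Println(r.serve("example.com."))
	r.SetForwardingPolicy(true) // sandbox gained external connectivity
	fmt.Println(r.serve("example.com."))
}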

@@ -1,40 +0,0 @@
From bccf3bf09d4f3083e58ad33fd4213d286d0ff1e4 Mon Sep 17 00:00:00 2001
From: Aleksa Sarai <cyphar@cyphar.com>
Date: Tue, 25 Mar 2025 12:02:42 +1100
Subject: [PATCH 11/13] CVE-2025-22868: vendor: jws: split token into fixed
number of parts
Thanks to 'jub0bs' for reporting this issue.
Fixes: CVE-2025-22868
Reviewed-on: https://go-review.googlesource.com/c/oauth2/+/652155
Reviewed-by: Damien Neil <dneil@google.com>
Reviewed-by: Roland Shoemaker <roland@golang.org>
(Cherry-picked from golang.org/x/oauth2@681b4d8edca1bcfea5bce685d77ea7b82ed3e7b3.)
SUSE-Bugs: https://bugzilla.suse.com/show_bug.cgi?id=1239185
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
---
vendor/golang.org/x/oauth2/jws/jws.go | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/vendor/golang.org/x/oauth2/jws/jws.go b/vendor/golang.org/x/oauth2/jws/jws.go
index 95015648b43f..6f03a49d3120 100644
--- a/vendor/golang.org/x/oauth2/jws/jws.go
+++ b/vendor/golang.org/x/oauth2/jws/jws.go
@@ -165,11 +165,11 @@ func Encode(header *Header, c *ClaimSet, key *rsa.PrivateKey) (string, error) {
// Verify tests whether the provided JWT token's signature was produced by the private key
// associated with the supplied public key.
func Verify(token string, key *rsa.PublicKey) error {
- parts := strings.Split(token, ".")
- if len(parts) != 3 {
+ if strings.Count(token, ".") != 2 {
return errors.New("jws: invalid token received, token must have 3 parts")
}
+ parts := strings.SplitN(token, ".", 3)
signedContent := parts[0] + "." + parts[1]
signatureString, err := base64.RawURLEncoding.DecodeString(parts[2])
if err != nil {
--
2.48.1

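The fix is the standard defence for splitting attacker-controlled strings: validate the separator count first, then use SplitN with a fixed bound so the allocation stays constant regardless of input size. A short sketch of the same shape (illustrative, not the vendored jws code):

package main

import (
	"errors"
	"fmt"
	"strings"
)

// verifyShape counts separators before splitting: SplitN allocates at most
// 3 slices, whereas strings.Split on a token of a million dots would
// allocate a million entries before the length check could reject it.
func verifyShape(token string) ([]string, error) {
	if strings.Count(token, ".") != 2 {
		return nil, errors.New("token must have 3 parts")
	}
	return strings.SplitN(token, ".", 3), nil
}

func main() {
	parts, err := verifyShape("header.claims.signature")
	fmt.Println(parts, err) // [header claims signature] <nil>
	_, err = verifyShape(strings.Repeat(".", 1<<20)) // rejected cheaply
	fmt.Println(err)
}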

@@ -1,65 +0,0 @@
From 0392c617b8e75f0b59a922f95c691fdd05eaf99f Mon Sep 17 00:00:00 2001
From: Aleksa Sarai <cyphar@cyphar.com>
Date: Thu, 21 Nov 2024 20:00:07 +1100
Subject: [PATCH 11/11] TESTS: backport fixes for integration tests
We need a couple of patches to make the tests work on SLES:
* 143b3b2ef3d0 ("test: update registry version to latest")
* 1a453abfb172 ("integration-cli: don't skip AppArmor tests on SLES")
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
---
Dockerfile | 2 +-
integration-cli/requirements_test.go | 3 ---
testutil/registry/registry.go | 4 +++-
3 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/Dockerfile b/Dockerfile
index 463d5cfc1a86..7a23962af09b 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -59,7 +59,7 @@ WORKDIR /go/src/github.com/docker/distribution
# from the https://github.com/docker/distribution repository. This version of
# the registry is used to test both schema 1 and schema 2 manifests. Generally,
# the version specified here should match a current release.
-ARG REGISTRY_VERSION=v2.3.0
+ARG REGISTRY_VERSION=v2.8.2
# REGISTRY_VERSION_SCHEMA1 specifies the version of the registry to build and
# install from the https://github.com/docker/distribution repository. This is
# an older (pre v2.3.0) version of the registry that only supports schema1
diff --git a/integration-cli/requirements_test.go b/integration-cli/requirements_test.go
index 2313272d7704..e5f72397e1bc 100644
--- a/integration-cli/requirements_test.go
+++ b/integration-cli/requirements_test.go
@@ -85,9 +85,6 @@ func Network() bool {
}
func Apparmor() bool {
- if strings.HasPrefix(testEnv.DaemonInfo.OperatingSystem, "SUSE Linux Enterprise Server ") {
- return false
- }
buf, err := os.ReadFile("/sys/module/apparmor/parameters/enabled")
return err == nil && len(buf) > 1 && buf[0] == 'Y'
}
diff --git a/testutil/registry/registry.go b/testutil/registry/registry.go
index 9213db2ba21a..d8bfe17678a4 100644
--- a/testutil/registry/registry.go
+++ b/testutil/registry/registry.go
@@ -107,10 +107,12 @@ http:
}
binary := V2binary
+ args := []string{"serve", confPath}
if c.schema1 {
binary = V2binarySchema1
+ args = []string{confPath}
}
- cmd := exec.Command(binary, confPath)
+ cmd := exec.Command(binary, args...)
cmd.Stdout = c.stdout
cmd.Stderr = c.stderr
if err := cmd.Start(); err != nil {
--
2.47.1


@@ -1,137 +0,0 @@
From 10fea3f3ff69a2081242d373adef2fe214071a93 Mon Sep 17 00:00:00 2001
From: Aleksa Sarai <cyphar@cyphar.com>
Date: Tue, 25 Mar 2025 12:05:38 +1100
Subject: [PATCH 12/13] CVE-2025-22869: vendor: ssh: limit the size of the
internal packet queue while waiting for KEX
In the SSH protocol, clients and servers execute the key exchange to
generate one-time session keys used for encryption and authentication.
The key exchange is performed initially after the connection is
established and then periodically after a configurable amount of data.
While a key exchange is in progress, we add the received packets to an
internal queue until we receive SSH_MSG_KEXINIT from the other side.
This can result in high memory usage if the other party is slow to
respond to the SSH_MSG_KEXINIT packet, or memory exhaustion if a
malicious client never responds to an SSH_MSG_KEXINIT packet during a
large file transfer.
We now limit the internal queue to 64 packets: this means 2MB with the
typical 32KB packet size.
When the internal queue is full we block further writes until the
pending key exchange is completed or there is a read or write error.
Thanks to Yuichi Watanabe for reporting this issue.
Fixes: CVE-2025-22869
Reviewed-on: https://go-review.googlesource.com/c/crypto/+/652135
Reviewed-by: Neal Patel <nealpatel@google.com>
Reviewed-by: Roland Shoemaker <roland@golang.org>
(Cherry-picked from golang.org/x/crypto@7292932d45d55c7199324ab0027cc86e8198aa22.)
SUSE-Bugs: https://bugzilla.suse.com/show_bug.cgi?id=1239322
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
---
vendor/golang.org/x/crypto/ssh/handshake.go | 47 ++++++++++++++++-----
1 file changed, 37 insertions(+), 10 deletions(-)
diff --git a/vendor/golang.org/x/crypto/ssh/handshake.go b/vendor/golang.org/x/crypto/ssh/handshake.go
index 70a7369ff913..e14eb6cba077 100644
--- a/vendor/golang.org/x/crypto/ssh/handshake.go
+++ b/vendor/golang.org/x/crypto/ssh/handshake.go
@@ -24,6 +24,11 @@ const debugHandshake = false
// quickly.
const chanSize = 16
+// maxPendingPackets sets the maximum number of packets to queue while waiting
+// for KEX to complete. This limits the total pending data to maxPendingPackets
+// * maxPacket bytes, which is ~16.8MB.
+const maxPendingPackets = 64
+
// keyingTransport is a packet based transport that supports key
// changes. It need not be thread-safe. It should pass through
// msgNewKeys in both directions.
@@ -58,11 +63,19 @@ type handshakeTransport struct {
incoming chan []byte
readError error
- mu sync.Mutex
- writeError error
- sentInitPacket []byte
- sentInitMsg *kexInitMsg
- pendingPackets [][]byte // Used when a key exchange is in progress.
+ mu sync.Mutex
+ // Condition for the above mutex. It is used to notify a completed key
+ // exchange or a write failure. Writes can wait for this condition while a
+ // key exchange is in progress.
+ writeCond *sync.Cond
+ writeError error
+ sentInitPacket []byte
+ sentInitMsg *kexInitMsg
+ // Used to queue writes when a key exchange is in progress. The length is
+ // limited by maxPendingPackets. Once full, writes will block until the key
+ // exchange is completed or an error occurs. If not empty, it is emptied
+ // all at once when the key exchange is completed in kexLoop.
+ pendingPackets [][]byte
writePacketsLeft uint32
writeBytesLeft int64
@@ -114,6 +127,7 @@ func newHandshakeTransport(conn keyingTransport, config *Config, clientVersion,
config: config,
}
+ t.writeCond = sync.NewCond(&t.mu)
t.resetReadThresholds()
t.resetWriteThresholds()
@@ -236,6 +250,7 @@ func (t *handshakeTransport) recordWriteError(err error) {
defer t.mu.Unlock()
if t.writeError == nil && err != nil {
t.writeError = err
+ t.writeCond.Broadcast()
}
}
@@ -339,6 +354,8 @@ write:
}
}
t.pendingPackets = t.pendingPackets[:0]
+ // Unblock writePacket if waiting for KEX.
+ t.writeCond.Broadcast()
t.mu.Unlock()
}
@@ -526,11 +543,20 @@ func (t *handshakeTransport) writePacket(p []byte) error {
}
if t.sentInitMsg != nil {
- // Copy the packet so the writer can reuse the buffer.
- cp := make([]byte, len(p))
- copy(cp, p)
- t.pendingPackets = append(t.pendingPackets, cp)
- return nil
+ if len(t.pendingPackets) < maxPendingPackets {
+ // Copy the packet so the writer can reuse the buffer.
+ cp := make([]byte, len(p))
+ copy(cp, p)
+ t.pendingPackets = append(t.pendingPackets, cp)
+ return nil
+ }
+ for t.sentInitMsg != nil {
+ // Block and wait for KEX to complete or an error.
+ t.writeCond.Wait()
+ if t.writeError != nil {
+ return t.writeError
+ }
+ }
}
if t.writeBytesLeft > 0 {
@@ -547,6 +573,7 @@ func (t *handshakeTransport) writePacket(p []byte) error {
if err := t.pushPacket(p); err != nil {
t.writeError = err
+ t.writeCond.Broadcast()
}
return nil
--
2.48.1

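The mechanism is a bounded queue with backpressure: writers may queue up to maxPendingPackets while the key exchange is in flight, and once the queue is full they block on a sync.Cond until kexLoop drains the queue or records an error. A reduced sketch of that pattern (illustrative, not the vendored x/crypto code):

package main

import (
	"fmt"
	"sync"
)

// pendingQueue models the patched handshakeTransport: while a key exchange
// is in progress, writes are queued up to a fixed limit, and once the limit
// is hit the writer blocks on a sync.Cond until the exchange finishes or an
// error is recorded.
type pendingQueue struct {
	mu            sync.Mutex
	cond          *sync.Cond
	pending       [][]byte
	max           int
	kexInProgress bool
	err           error
}

func newPendingQueue(max int) *pendingQueue {
	q := &pendingQueue{max: max, kexInProgress: true}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *pendingQueue) write(p []byte) error {
	q.mu.Lock()
	defer q.mu.Unlock()
	for q.kexInProgress {
		if q.err != nil {
			return q.err
		}
		if len(q.pending) < q.max {
			// Copy the packet so the caller can reuse its buffer.
			q.pending = append(q.pending, append([]byte(nil), p...))
			return nil
		}
		q.cond.Wait() // queue full: block until the key exchange settles
	}
	// Key exchange done; the real code would now send the packet directly.
	return q.err
}

func (q *pendingQueue) finishKex() {
	q.mu.Lock()
	q.pending = q.pending[:0] // flushed by kexLoop in the real code
	q.kexInProgress = false
	q.cond.Broadcast() // wake any writer blocked on a full queue
	q.mu.Unlock()
}

func main() {
	q := newPendingQueue(1)
	fmt.Println(q.write([]byte("a"))) // queued immediately
	go q.finishKex()
	fmt.Println(q.write([]byte("b"))) // blocks until finishKex runs
}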

@@ -1,65 +0,0 @@
From 95339a6a838e2b45af819e5492cbb6c7c0fbac0e Mon Sep 17 00:00:00 2001
From: Aleksa Sarai <cyphar@cyphar.com>
Date: Thu, 21 Nov 2024 20:00:07 +1100
Subject: [PATCH 13/13] TESTS: backport fixes for integration tests
We need a couple of patches to make the tests work on SLES:
* 143b3b2ef3d0 ("test: update registry version to latest")
* 1a453abfb172 ("integration-cli: don't skip AppArmor tests on SLES")
Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
---
Dockerfile | 2 +-
integration-cli/requirements_test.go | 3 ---
testutil/registry/registry.go | 4 +++-
3 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/Dockerfile b/Dockerfile
index 463d5cfc1a86..7a23962af09b 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -59,7 +59,7 @@ WORKDIR /go/src/github.com/docker/distribution
# from the https://github.com/docker/distribution repository. This version of
# the registry is used to test both schema 1 and schema 2 manifests. Generally,
# the version specified here should match a current release.
-ARG REGISTRY_VERSION=v2.3.0
+ARG REGISTRY_VERSION=v2.8.2
# REGISTRY_VERSION_SCHEMA1 specifies the version of the registry to build and
# install from the https://github.com/docker/distribution repository. This is
# an older (pre v2.3.0) version of the registry that only supports schema1
diff --git a/integration-cli/requirements_test.go b/integration-cli/requirements_test.go
index 2313272d7704..e5f72397e1bc 100644
--- a/integration-cli/requirements_test.go
+++ b/integration-cli/requirements_test.go
@@ -85,9 +85,6 @@ func Network() bool {
}
func Apparmor() bool {
- if strings.HasPrefix(testEnv.DaemonInfo.OperatingSystem, "SUSE Linux Enterprise Server ") {
- return false
- }
buf, err := os.ReadFile("/sys/module/apparmor/parameters/enabled")
return err == nil && len(buf) > 1 && buf[0] == 'Y'
}
diff --git a/testutil/registry/registry.go b/testutil/registry/registry.go
index 9213db2ba21a..d8bfe17678a4 100644
--- a/testutil/registry/registry.go
+++ b/testutil/registry/registry.go
@@ -107,10 +107,12 @@ http:
}
binary := V2binary
+ args := []string{"serve", confPath}
if c.schema1 {
binary = V2binarySchema1
+ args = []string{confPath}
}
- cmd := exec.Command(binary, confPath)
+ cmd := exec.Command(binary, args...)
cmd.Stdout = c.stdout
cmd.Stderr = c.stderr
if err := cmd.Start(); err != nil {
--
2.48.1


@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd0f81752a02e20b611f95a35718bdc44eb1e203e0fd80d7afb87dfd8135c300
size 6445376

docker-buildx-0.19.2.tar.xz (Stored with Git LFS; binary file not shown)

docker-buildx-0.22.0.tar.xz (Stored with Git LFS; binary file not shown)