Accepting request 1224000 from devel:kubic

OBS-URL: https://build.opensuse.org/request/show/1224000
OBS-URL: https://build.opensuse.org/package/show/openSUSE:Factory/velero?expand=0&rev=28
Ana Guerrero 2024-11-14 15:08:59 +00:00 committed by Git OBS Bridge
commit 02bdc9d434
8 changed files with 111 additions and 23 deletions

_service

@@ -3,9 +3,9 @@
<param name="url">https://github.com/vmware-tanzu/velero</param>
<param name="scm">git</param>
<param name="exclude">.git</param>
<param name="revision">v1.15.0</param>
<param name="versionformat">@PARENT_TAG@</param>
<param name="versionrewrite-pattern">v(.*)</param>
<param name="revision">v1.14.1</param>
<param name="changesgenerate">enable</param>
</service>
<service name="set_version" mode="manual">

_servicedata

@@ -1,4 +1,4 @@
<servicedata>
<service name="tar_scm">
<param name="url">https://github.com/vmware-tanzu/velero</param>
<param name="changesrevision">8afe3cea8b7058f7baaf447b9fb407312c40d2da</param></service></servicedata>
<param name="changesrevision">1d4f1475975b5107ec35f4d19ff17f7d1fcb3edf</param></service></servicedata>

velero-1.14.1.obscpio (deleted)

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:626826f34f341f26eb4c0a75e2fc6159a951c1c1d92e81afd24a3d89940e913e
size 52891662

velero-1.15.0.obscpio (new file)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:87bfb63e625db6fec559050098e0001f2ccc523f1274671ad7b26a7330b5487b
size 55509518

velero.changes

@@ -1,3 +1,96 @@
-------------------------------------------------------------------
Tue Nov 12 06:24:04 UTC 2024 - opensuse_buildservice@ojkastl.de
- Update to version 1.15.0:
  Changelog: https://velero.io/docs/v1.15/
  Upgrading: https://velero.io/docs/v1.15/upgrade-to-1.15/
  * Data mover micro service
    Data transfer activities for CSI Snapshot Data Movement are
    moved from the node-agent pods to dedicated backupPods or
    restorePods. This brings several benefits:
    - Volume data is no longer accessed through the host path;
      host path access is privileged and may involve security
      escalations, which is a concern for users.
    - Resource allocations (CPU, memory) can be controlled in a
      granular manner, e.g. per backup/restore of a volume.
    - Resilience is improved: a crash of one data movement
      activity does not affect the others.
    - Unnecessary full backups caused by host path changes after
      workload pod restarts are prevented.
    For more information, check the design
    https://github.com/vmware-tanzu/velero/blob/main/design/Implemented/vgdp-micro-service/vgdp-micro-service.md
  * Item Block concept and ItemBlockAction (IBA) plugin
    The Item Block concept is introduced for resource backups to
    enable multi-threaded backups: correlated resources are
    grouped into the same item block, and item blocks can be
    processed concurrently by multiple threads.
    The ItemBlockAction plugin helps Velero categorize resources
    into item blocks. At present, Velero provides built-in IBAs
    for pods and PVCs, and it also supports custom IBAs for any
    resource.
    In v1.15 the Item Block concept and IBA plugins are fully
    supported, but item blocks are not yet processed in multiple
    threads; multi-thread support will be delivered in future
    releases.
    For more information, check the design
    https://github.com/vmware-tanzu/velero/blob/main/design/backup-performance-improvements.md
  * Node selection for repository maintenance jobs
    Repository maintenance is a resource-consuming task. Velero
    now allows you to configure the nodes that run repository
    maintenance jobs, so you can run them on idle nodes or keep
    them away from nodes hosting critical workloads.
    To support this, a new repository maintenance configuration
    configMap is introduced.
    For more information, check the document
    https://velero.io/docs/v1.15/repository-maintenance/
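    A minimal sketch of preparing such a configMap follows; the
    configMap name, the data key, the node label and the JSON
    field names are illustrative assumptions, so verify them
    against the document above before use:
      # write the maintenance job configuration as JSON (assumed schema)
      printf '%s\n' '{
        "global": {
          "loadAffinity": [
            { "nodeSelector": { "matchLabels": { "velero.io/repo-maintenance": "allowed" } } }
          ]
        }
      }' > repo-maintenance-config.json
      # create the configMap in the velero namespace; reference it from
      # the velero server as described in the document above
      kubectl -n velero create configmap repo-maintenance-job-config \
        --from-file=repo-maintenance-config.json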
  * Backup PVC read-only configuration
    In 1.15, Velero allows you to configure the data mover
    backupPods to mount the backupPVCs read-only. This can
    significantly accelerate the data mover expose process for
    some storages, e.g. Ceph.
    To support this, a new backup PVC configuration configMap is
    introduced (a combined configuration sketch follows the next
    item).
    For more information, check the document
    https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/
  * Backup PVC storage class configuration
    In 1.15, Velero allows you to configure the storage class
    used for the data mover backupPVCs. The provisioning of
    backupPVCs then does not need to follow the same pattern as
    the workload PVCs; e.g. a backupPVC only needs one replica,
    whereas a workload PVC may have multiple replicas.
    To support this, the same backup PVC configuration configMap
    is used.
    For more information, check the document
    https://velero.io/docs/v1.15/data-movement-backup-pvc-configuration/
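    A combined sketch covering both backupPVC settings (read-only
    mount and storage class); the configMap name, the data key,
    the JSON field names and the storage class names are
    illustrative assumptions, so check the document above for the
    exact schema:
      # map the storage class of the source PVC to backupPVC settings (assumed schema)
      printf '%s\n' '{
        "backupPVC": {
          "source-storage-class": {
            "storageClass": "backup-storage-class",
            "readOnly": true
          }
        }
      }' > backup-pvc-config.json
      # create the configMap consumed by the node-agent, as described in the document above
      kubectl -n velero create configmap node-agent-config \
        --from-file=backup-pvc-config.json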
  * Backup repository data cache configuration
    The backup repository may need to cache data on the client
    side during various repository operations, e.g. read, write
    and maintenance. The cache consumes space in the root file
    system of the pod where the repository is accessed.
    In 1.15, Velero allows you to configure the total size of
    this cache per repository, so a pod without enough space in
    its root file system is not evicted for running out of
    ephemeral storage.
    To support this, a new backup repository configuration
    configMap is introduced.
    For more information, check the document
    https://velero.io/docs/v1.15/backup-repository-configuration/
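    A minimal sketch of such a configMap; the repository type
    key, the cacheLimitMB field, the configMap name and the data
    key are assumptions here, so verify them against the document
    above:
      # cap the client-side cache of kopia-type repositories at roughly 2 GiB (assumed schema)
      printf '%s\n' '{
        "kopia": {
          "cacheLimitMB": 2048
        }
      }' > backup-repository-config.json
      # create the configMap and reference it from the velero server
      # as described in the document above
      kubectl -n velero create configmap backup-repository-config \
        --from-file=backup-repository-config.json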
  * Performance improvements
    1.15 includes several performance fixes and enhancements
    that bring significant improvements in specific scenarios:
    - A memory leak in the Velero server after plugin calls has
      been fixed, see issue #7925.
    - The client-burst/client-qps parameters are now inherited
      by plugins, so the same velero server parameters also
      accelerate plugin execution when a large number of API
      server calls happen, see issue #7806.
    - Maintenance of a Kopia repository could consume huge
      amounts of memory when a huge number of files had been
      backed up; Velero 1.15 includes the Kopia upstream
      enhancement that fixes this, see issue #7510.
-------------------------------------------------------------------
Fri Sep 13 18:24:59 UTC 2024 - opensuse_buildservice@ojkastl.de

velero.obsinfo

@@ -1,4 +1,4 @@
name: velero
version: 1.14.1
mtime: 1724118663
commit: 8afe3cea8b7058f7baaf447b9fb407312c40d2da
version: 1.15.0
mtime: 1730086408
commit: 1d4f1475975b5107ec35f4d19ff17f7d1fcb3edf

velero.spec

@@ -17,7 +17,7 @@
Name: velero
Version: 1.14.1
Version: 1.15.0
Release: 0
Summary: Backup program with deduplication and encryption
License: Apache-2.0
@@ -25,8 +25,10 @@ Group: Productivity/Archiving/Backup
URL: https://velero.io
Source0: %{name}-%{version}.tar.gz
Source1: vendor.tar.gz
BuildRequires: golang-packaging
BuildRequires: golang(API) = 1.22
BuildRequires: bash-completion
BuildRequires: fish
BuildRequires: go >= 1.22
BuildRequires: zsh
%description
velero is a backup program. It supports verification, encryption,
@@ -93,8 +95,8 @@ mkdir -p %{buildroot}%{_datarootdir}/bash-completion/completions
%{buildroot}/%{_bindir}/%{name} completion bash > %{buildroot}%{_datarootdir}/bash-completion/completions/%{name}
# create the zsh completion file
mkdir -p %{buildroot}%{_datarootdir}/zsh_completion.d
%{buildroot}/%{_bindir}/%{name} completion zsh > %{buildroot}%{_datarootdir}/zsh_completion.d/_%{name}
mkdir -p %{buildroot}%{_datarootdir}/zsh/site-functions
%{buildroot}/%{_bindir}/%{name} completion zsh > %{buildroot}%{_datarootdir}/zsh/site-functions/_%{name}
# create the fish completion file
mkdir -p %{buildroot}%{_datadir}/fish/vendor_completions.d
@@ -106,19 +108,12 @@ mkdir -p %{buildroot}%{_datadir}/fish/vendor_completions.d
%{_bindir}/%{name}
%files bash-completion
%defattr(-,root,root)
%dir %{_datarootdir}/bash-completion/completions/
%{_datarootdir}/bash-completion/completions/%{name}
%files zsh-completion
%defattr(-,root,root)
%dir %{_datarootdir}/zsh_completion.d/
%{_datarootdir}/zsh_completion.d/_%{name}
%{_datarootdir}/zsh/site-functions/_%{name}
%files fish-completion
%defattr(-,root,root)
%dir %{_datarootdir}/fish
%dir %{_datarootdir}/fish/vendor_completions.d
%{_datarootdir}/fish/vendor_completions.d/%{name}.fish
%changelog

vendor.tar.gz

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ef7bf5f1bc7a8759b4eccb5d9f4893a7413357e8f88b874c12ef80655b19ebcd
size 15266132
oid sha256:0a3a9cac026bed85d52e415ba3e9b41d9f75ef1cc15e9b0ab465305d7011af6e
size 15419451