Accepting request 728458 from home:hmzhao:branches:openSUSE:Factory

Upgrade lvm2 from 2.02.180 to 2.03.05.
This upgrade is only for openSUSE and SLES 15 SP2.

OBS-URL: https://build.opensuse.org/request/show/728458
OBS-URL: https://build.opensuse.org/package/show/Base:System/lvm2?expand=0&rev=249
Gang He 2019-09-05 10:03:51 +00:00 committed by Git OBS Bridge
parent 0badf76b5c
commit 5c670ebc3c
8 changed files with 311 additions and 406 deletions

bug-1072624_test-lvmetad_dump-always-timed-out-when-using-nc.patch

@@ -1,39 +0,0 @@
From 6ff44e96eb804f9024bf3f606d207bd863f0e672 Mon Sep 17 00:00:00 2001
From: Eric Ren <zren@suse.com>
Date: Wed, 13 Dec 2017 18:53:00 +0800
Subject: [PATCH] test: lvmetad_dump always timed out when using nc
lvmetad_dump uses either "socat" or "nc" to communicate
with lvmetad. But when "nc" is used because "socat" is not
available, nc listens forever by default, causing the
testcase to time out.
Signed-off-by: Eric Ren <zren@suse.com>
---
test/lib/aux.sh | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/test/lib/aux.sh b/test/lib/aux.sh
index 6bc7bd47e..4603c1504 100644
--- a/test/lib/aux.sh
+++ b/test/lib/aux.sh
@@ -243,14 +243,14 @@ lvmetad_talk() {
local use=nc
if type -p socat >& /dev/null; then
use=socat
- elif echo | not nc -U "$TESTDIR/lvmetad.socket" ; then
+ elif echo | not nc -w 1 -U "$TESTDIR/lvmetad.socket" ; then
echo "WARNING: Neither socat nor nc -U seems to be available." 1>&2
echo "## failed to contact lvmetad."
return 1
fi
if test "$use" = nc ; then
- nc -U "$TESTDIR/lvmetad.socket"
+ nc -w 1 -U "$TESTDIR/lvmetad.socket"
else
socat "unix-connect:$TESTDIR/lvmetad.socket" -
fi | tee -a lvmetad-talk.txt
--
2.13.6
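For reference, -w 1 gives nc a one-second idle timeout, so the pipeline exits once lvmetad's reply has arrived instead of blocking until the peer closes the socket. A minimal sketch of the difference (same socket path as the testsuite):

$ echo | nc -U "$TESTDIR/lvmetad.socket"        # no timeout: may block forever
$ echo | nc -w 1 -U "$TESTDIR/lvmetad.socket"   # exits about 1s after the last data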

bug-1122666_devices-drop-open-error-message.patch

@@ -0,0 +1,31 @@
From 559cf0cd1e226baf63a98c39572264fbf5c3f6b4 Mon Sep 17 00:00:00 2001
From: David Teigland <teigland@redhat.com>
Date: Tue, 23 Apr 2019 09:39:42 -0500
Subject: [PATCH] devices: drop open error message
This open error is being printed in more common,
non-error circumstances than expected. After a
number of complaints, make it only a debug message.
---
lib/device/dev-io.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/lib/device/dev-io.c b/lib/device/dev-io.c
index 2a83a9657..6996a44dc 100644
--- a/lib/device/dev-io.c
+++ b/lib/device/dev-io.c
@@ -572,10 +572,7 @@ int dev_open_flags(struct device *dev, int flags, int direct, int quiet)
}
}
#endif
- if (quiet)
- log_sys_debug("open", name);
- else
- log_sys_error("open", name);
+ log_sys_debug("open", name);
dev->flags |= DEV_OPEN_FAILURE;
return 0;
--
2.21.0

bug-950089_test-fix-lvm2-testsuite-build-error.patch

@@ -1,44 +0,0 @@
commit 0402acbbb9f8f6066a3f7899e8cc3ae72b84ee20
Author: Zhilong Liu <zlliu@suse.com>
Date: Wed Dec 7 02:43:10 2016 -0500
add a new test package named lvm2-testsuite (bnc#950089)
+ lvm2-testsuite.patch
Currently this new package is not enabled by default.
Please set enable_testsuite to 1 to turn it on in the
spec file.
Eric:
This patch solves the following OBS build error:
"""
E: arch-dependent-file-in-usr-share (Badness: 590) /usr/share/lvm2-testsuite/api/lvtest.t
...
"""
So, move the *.t binaries into /usr/lib/lvm2-testsuite/.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
diff --git a/test/Makefile.in b/test/Makefile.in
index f152868..f0845d7 100644
--- a/test/Makefile.in
+++ b/test/Makefile.in
@@ -224,7 +224,7 @@ install: .tests-stamp lib/paths-installed
$(INSTALL_DATA) api/*.sh $(DATADIR)/api
$(INSTALL_DATA) unit/*.sh $(DATADIR)/unit
$(INSTALL_DATA) lib/mke2fs.conf $(DATADIR)/lib
- $(INSTALL_PROGRAM) api/*.{t,py} $(DATADIR)/api
+ $(INSTALL_PROGRAM) api/*.py $(DATADIR)/api/
$(INSTALL_PROGRAM) unit/unit-test $(DATADIR)/unit
$(INSTALL_PROGRAM) dbus/*.py $(DATADIR)/dbus/
$(INSTALL_DATA) lib/paths-installed $(DATADIR)/lib/paths
@@ -244,6 +244,7 @@ install: .tests-stamp lib/paths-installed
@cd $(EXECDIR) && for i in $(LIB_LINK_NOT); do \
echo "$(LN_S) -f not $$i"; \
$(LN_S) -f not $$i; done
+ $(INSTALL_PROGRAM) api/*.t $(EXECDIR)
$(INSTALL_PROGRAM) -D lib/runner $(bindir)/lvm2-testsuite
lib/should: lib/not
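For reference, the rpmlint error quoted above fires because the *.t files are compiled ELF binaries, while /usr/share is reserved for architecture-independent data. A sketch of how to confirm this (path per the patch, output abridged):

$ file /usr/share/lvm2-testsuite/api/lvtest.t
lvtest.t: ELF 64-bit LSB executable ...
# After the patch, the binaries install under $(EXECDIR), i.e.
# /usr/lib/lvm2-testsuite/, and only scripts remain in $(DATADIR).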

bug-978055_clvmd-try-to-refresh-device-cache-on-the-first-failu.patch

@@ -1,92 +0,0 @@
From 4f0681b1a296d88ac1dbdb26e46afed3285ad1bf Mon Sep 17 00:00:00 2001
From: Eric Ren <zren@suse.com>
Date: Tue, 23 May 2017 15:09:46 +0800
Subject: [PATCH 09/10] clvmd: try to refresh device cache on the first failure
1. The original problem
$ sudo lvchange -ay testvg/testlv
Error locking on node 1302cf30: Volume group for uuid not found:
qBKu65bSxfRq7gUf91NZuH4epLza4ifDieQJFd2to2WruVi5Brn7DxxsEgi5Zodw
2. This problem can be easily replicated
a. Make clvmd running in cluster environment;
b. Assume you have created LV 'testlv' in local VG 'testvg' on
an MD device 'md0';
c. Make sure 'md0' is stopped, and not in the device cache by
executing 'clvmd -R' or 'pvscan';
d. Assemble 'md0' by issuing 'mdadm --assemble --scan --name md0';
e. To activate 'testlv', you will see the 'Error locking' problem.
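The steps above condense to the following hypothetical shell session (device and VG names are examples; the error text is abridged from section 1):

$ mdadm --stop /dev/md0 && clvmd -R        # step 2.c: md0 drops out of the device cache
$ mdadm --assemble --scan --name md0       # step 2.d: md0 reappears
$ sudo lvchange -ay testvg/testlv          # step 2.e
Error locking on node 1302cf30: Volume group for uuid not found: ...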
3. Analysis
a. After step 2.d, 'pvscan --cache ...' is triggered by udev rules,
announcing that 'md0' is ready. But pvscan exits very early because
lvmetad is not being used, and thus never goes through the lock manager.
Therefore, clvmd is not aware of these udev events, and its device
cache does not contain 'md0'.
b. In step 2.e, the client command, 'lvchange -ay testvg/testlv', can find
'testlv' correctly in the client metadata, because the device list
is gathered by the call chain:
lvm_run_command()->init_filters()->persistent_filter_load()->dev_cache_scan().
Then, it asks clvmd for "Locking VG V_testvg CR", which just drops
the metadata in clvmd via the call chain do_lock_vg()->lvmcache_drop_metadata(),
but the device cache is *not* refreshed.
c. Finally, clvmd fails to find the lvid in the activation path:
do_lock_lv()->do_activate_lv()->lv_info_by_lvid()
Apparently, the metadata DB is not complete without a complete device
cache in clvmd. However, upstream says the pvscan tool is intended to
be used only with lvmetad, and suggested not hacking there. So, we'd
better fix this issue within the clvmd code.
Sometimes the device cache in clvmd can be out of date;
"clvmd -R" was invented for this issue. However, running
"clvmd -R" manually is not convenient, because it's hard
to predict when a device change will happen.
This patch retries once after refreshing the device
cache. Normally this causes no side effects; in
case of the issue above, the retry is worthwhile.
Signed-off-by: Eric Ren <zren@suse.com>
---
daemons/clvmd/lvm-functions.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/daemons/clvmd/lvm-functions.c b/daemons/clvmd/lvm-functions.c
index 2446fd1..dcd3f9b 100644
--- a/daemons/clvmd/lvm-functions.c
+++ b/daemons/clvmd/lvm-functions.c
@@ -509,11 +509,14 @@ const char *do_lock_query(char *resource)
int do_lock_lv(unsigned char command, unsigned char lock_flags, char *resource)
{
int status = 0;
+ int do_refresh = 0;
DEBUGLOG("do_lock_lv: resource '%s', cmd = %s, flags = %s, critical_section = %d\n",
resource, decode_locking_cmd(command), decode_flags(lock_flags), critical_section());
- if (!cmd->initialized.config || config_files_changed(cmd)) {
+again:
+ if (!cmd->initialized.config || config_files_changed(cmd)
+ || do_refresh) {
/* Reinitialise various settings inc. logging, filters */
if (do_refresh_cache()) {
log_error("Updated config file invalid. Aborting.");
@@ -579,6 +582,12 @@ int do_lock_lv(unsigned char command, unsigned char lock_flags, char *resource)
init_test(0);
pthread_mutex_unlock(&lvm_lock);
+ /* Try again in case device cache is stale */
+ if (status == EIO && !do_refresh) {
+ do_refresh = 1;
+ goto again;
+ }
+
DEBUGLOG("Command return is %d, critical_section is %d\n", status, critical_section());
return status;
}
--
2.10.2

lvm.conf

@@ -21,7 +21,6 @@
# N.B. Take care that each setting only appears once if uncommenting
# example settings in this file.
#hello
# Configuration section config.
# How LVM configuration settings are handled.
@@ -124,7 +123,6 @@ devices {
# then the device is accepted. Be careful mixing 'a' and 'r' patterns,
# as the combination might produce unexpected results (test changes.)
# Run vgscan after changing the filter to regenerate the cache.
# See the use_lvmetad comment for a special case regarding filters.
#
# Example
# Accept every block device:
@@ -140,36 +138,20 @@ devices {
#
# This configuration option has an automatic default value.
# filter = [ "a|.*/|" ]
filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|", "r|/dev/fd.*|", "r|/dev/cdrom|", "a/.*/" ]
# Below filter was used in SUSE/openSUSE before lvm2-2.03. It conflicts
# with lvm2-2.02.180+, so comment out in lvm2-2.03 release.
# filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|", "r|/dev/fd.*|", "r|/dev/cdrom|", "a/.*/" ]
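# [Editor's aside, not part of lvm.conf] A candidate filter can be tried
# from the command line via --config before editing this file; /dev/sdb
# below is only an example device:
$ pvs --config 'devices { filter = [ "r|/dev/sdb|", "a|.*|" ] }'
# Patterns are matched in order, so the leading 'r' rule hides /dev/sdb
# while the trailing 'a' rule accepts every other device.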
# Configuration option devices/global_filter.
# Limit the block devices that are used by LVM system components.
# Because devices/filter may be overridden from the command line, it is
# not suitable for system-wide device filtering, e.g. udev and lvmetad.
# not suitable for system-wide device filtering, e.g. udev.
# Use global_filter to hide devices from these LVM system components.
# The syntax is the same as devices/filter. Devices rejected by
# global_filter are not opened by LVM.
# This configuration option has an automatic default value.
# global_filter = [ "a|.*/|" ]
# Configuration option devices/cache_dir.
# Directory in which to store the device cache file.
# The results of filtering are cached on disk to avoid rescanning dud
# devices (which can take a very long time). By default this cache is
# stored in a file named .cache. It is safe to delete this file; the
# tools regenerate it. If obtain_device_list_from_udev is enabled, the
# list of devices is obtained from udev and any existing .cache file
# is removed.
cache_dir = "/etc/lvm/cache"
# Configuration option devices/cache_file_prefix.
# A prefix used before the .cache file name. See devices/cache_dir.
cache_file_prefix = ""
# Configuration option devices/write_cache_state.
# Enable/disable writing the cache file. See devices/cache_dir.
write_cache_state = 1
# Configuration option devices/types.
# List of additional acceptable block device types.
# These are of device type names from /proc/devices, followed by the
@@ -187,6 +169,10 @@ devices {
# present on the system. sysfs must be part of the kernel and mounted.)
sysfs_scan = 1
# Configuration option devices/scan_lvs.
# Scan LVM LVs for layered PVs.
scan_lvs = 0
# Configuration option devices/multipath_component_detection.
# Ignore devices that are components of DM multipath devices.
multipath_component_detection = 1
@@ -270,14 +256,6 @@ devices {
# different way, making them a better choice for VG stacking.
ignore_lvm_mirrors = 1
# Configuration option devices/disable_after_error_count.
# Number of I/O errors after which a device is skipped.
# During each LVM operation, errors received from each device are
# counted. If the counter of a device exceeds the limit set here,
# no further I/O is sent to that device for the remainder of the
# operation. Setting this to 0 disables the counters altogether.
disable_after_error_count = 0
# Configuration option devices/require_restorefile_with_uuid.
# Allow use of pvcreate --uuid without requiring --restorefile.
require_restorefile_with_uuid = 1
@@ -348,7 +326,7 @@ allocation {
maximise_cling = 1
# Configuration option allocation/use_blkid_wiping.
# Use blkid to detect existing signatures on new PVs and LVs.
# Use blkid to detect and erase existing signatures on new PVs and LVs.
# The blkid library can detect more signatures than the native LVM
# detection code, but may take longer. LVM needs to be compiled with
# blkid wiping support for this setting to apply. LVM native detection
@@ -500,6 +478,154 @@ allocation {
# Default physical extent size in KiB to use for new VGs.
# This configuration option has an automatic default value.
# physical_extent_size = 4096
# Configuration option allocation/vdo_use_compression.
# Enables or disables compression when creating a VDO volume.
# Compression may be disabled if necessary to maximize performance
# or to speed processing of data that is unlikely to compress.
# This configuration option has an automatic default value.
# vdo_use_compression = 1
# Configuration option allocation/vdo_use_deduplication.
# Enables or disables deduplication when creating a VDO volume.
# Deduplication may be disabled in instances where data is not expected
# to have good deduplication rates but compression is still desired.
# This configuration option has an automatic default value.
# vdo_use_deduplication = 1
# Configuration option allocation/vdo_use_metadata_hints.
# Controls whether a VDO volume should tag its latency-critical
# writes with the REQ_SYNC flag. Some device mapper targets such as dm-raid5
# process writes with this flag at a higher priority.
# Default is enabled.
# This configuration option has an automatic default value.
# vdo_use_metadata_hints = 1
# Configuration option allocation/vdo_minimum_io_size.
# The minimum IO size for VDO volume to accept, in bytes.
# Valid values are 512 or 4096. The recommended and default value is 4096.
# This configuration option has an automatic default value.
# vdo_minimum_io_size = 4096
# Configuration option allocation/vdo_block_map_cache_size_mb.
# Specifies the amount of memory in MiB allocated for caching block map
# pages for VDO volume. The value must be a multiple of 4096 and must be
# at least 128MiB and less than 16TiB. The cache must be at least 16MiB
# per logical thread. Note that there is a memory overhead of 15%.
# This configuration option has an automatic default value.
# vdo_block_map_cache_size_mb = 128
# Configuration option allocation/vdo_block_map_period.
# The speed with which the block map cache writes out modified block map pages.
# A smaller era length is likely to reduce the amount of time spent rebuilding,
# at the cost of increased block map writes during normal operation.
# The maximum and recommended value is 16380; the minimum value is 1.
# This configuration option has an automatic default value.
# vdo_block_map_period = 16380
# Configuration option allocation/vdo_check_point_frequency.
# The default check point frequency for VDO volume.
# This configuration option has an automatic default value.
# vdo_check_point_frequency = 0
# Configuration option allocation/vdo_use_sparse_index.
# Enables sparse indexing for VDO volume.
# This configuration option has an automatic default value.
# vdo_use_sparse_index = 0
# Configuration option allocation/vdo_index_memory_size_mb.
# Specifies the amount of index memory in MiB for VDO volume.
# The value must be at least 256MiB and at most 1TiB.
# This configuration option has an automatic default value.
# vdo_index_memory_size_mb = 256
# Configuration option allocation/vdo_slab_size_mb.
# Specifies the size in MiB of the increment by which a VDO is grown.
# Using a smaller size constrains the total maximum physical size
# that can be accommodated. Must be a power of two between 128MiB and 32GiB.
# This configuration option has an automatic default value.
# vdo_slab_size_mb = 2048
# Configuration option allocation/vdo_ack_threads.
# Specifies the number of threads to use for acknowledging
# completion of requested VDO I/O operations.
# The value must be in range [0..100].
# This configuration option has an automatic default value.
# vdo_ack_threads = 1
# Configuration option allocation/vdo_bio_threads.
# Specifies the number of threads to use for submitting I/O
# operations to the storage device of VDO volume.
# The value must be in range [1..100]
# Each additional thread after the first will use an additional 18MiB of RAM,
# plus 1.12 MiB of RAM per megabyte of configured read cache size.
# This configuration option has an automatic default value.
# vdo_bio_threads = 1
# Configuration option allocation/vdo_bio_rotation.
# Specifies the number of I/O operations to enqueue for each bio-submission
# thread before directing work to the next. The value must be in range [1..1024].
# This configuration option has an automatic default value.
# vdo_bio_rotation = 64
# Configuration option allocation/vdo_cpu_threads.
# Specifies the number of threads to use for CPU-intensive work such as
# hashing or compression for VDO volume. The value must be in range [1..100]
# This configuration option has an automatic default value.
# vdo_cpu_threads = 2
# Configuration option allocation/vdo_hash_zone_threads.
# Specifies the number of threads across which to subdivide parts of the VDO
# processing based on the hash value computed from the block data.
# The value must be in range [0..100].
# vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
# either all zero or all non-zero.
# This configuration option has an automatic default value.
# vdo_hash_zone_threads = 1
# Configuration option allocation/vdo_logical_threads.
# Specifies the number of threads across which to subdivide parts of the VDO
# processing based on the hash value computed from the block data.
# A logical thread count of 9 or more will require explicitly specifying
# a sufficiently large block map cache size, as well.
# The value must be in range [0..100].
# vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
# either all zero or all non-zero.
# This configuration option has an automatic default value.
# vdo_logical_threads = 1
# Configuration option allocation/vdo_physical_threads.
# Specifies the number of threads across which to subdivide parts of the VDO
# processing based on physical block addresses.
# Each additional thread after the first will use an additional 10MiB of RAM.
# The value must be in range [0..16].
# vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
# either all zero or all non-zero.
# This configuration option has an automatic default value.
# vdo_physical_threads = 1
# Configuration option allocation/vdo_write_policy.
# Specifies the write policy:
# auto - VDO will check the storage device and determine whether it supports flushes.
# If it does, VDO will run in async mode, otherwise it will run in sync mode.
# sync - Writes are acknowledged only after data is stably written.
# This policy is not supported if the underlying storage is not also synchronous.
# async - Writes are acknowledged after data has been cached for writing to stable storage.
# Data which has not been flushed is not guaranteed to persist in this mode.
# This configuration option has an automatic default value.
# vdo_write_policy = "auto"
# Configuration option allocation/vdo_max_discard.
# Specifies the maximum size of discard bio accepted, in 4096-byte blocks.
# I/O requests to a VDO volume are normally split into 4096-byte blocks,
# and processed up to 2048 at a time. However, discard requests to a VDO volume
# can be automatically split to a larger size, up to <max discard> 4096-byte blocks
# in a single bio, and are limited to 1500 at a time.
# Increasing this value may provide better overall performance, at the cost of
# increased latency for the individual discard requests.
# The default and minimum is 1. The maximum is UINT_MAX / 4096.
# This configuration option has an automatic default value.
# vdo_max_discard = 1
}
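# [Editor's aside, not part of lvm.conf] The vdo_* defaults above take
# effect when a VDO volume is created; a minimal sketch (VG name and
# sizes are examples):
$ lvcreate --type vdo -n vdo_lv -L 10G -V 100G vg/vdopool
# -L sizes the physical VDO pool; -V sizes the virtual LV whose data is
# deduplicated and compressed into that pool.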
# Configuration section log.
@@ -614,9 +740,9 @@ log {
# Select log messages by class.
# Some debugging messages are assigned to a class and only appear in
# debug output if the class is listed here. Classes currently
# available: memory, devices, activation, allocation, lvmetad,
# available: memory, devices, io, activation, allocation,
# metadata, cache, locking, lvmpolld. Use "all" to see everything.
debug_classes = [ "memory", "devices", "activation", "allocation", "lvmetad", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
debug_classes = [ "memory", "devices", "io", "activation", "allocation", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
}
# Configuration section backup.
@@ -704,32 +830,6 @@ global {
# the error messages.
activation = 1
# Configuration option global/fallback_to_lvm1.
# Try running LVM1 tools if LVM cannot communicate with DM.
# This option only applies to 2.4 kernels and is provided to help
# switch between device-mapper kernels and LVM1 kernels. The LVM1
# tools need to be installed with .lvm1 suffices, e.g. vgscan.lvm1.
# They will stop working once the lvm2 on-disk metadata format is used.
# This configuration option has an automatic default value.
# fallback_to_lvm1 = 1
# Configuration option global/format.
# The default metadata format that commands should use.
# The -M 1|2 option overrides this setting.
#
# Accepted values:
# lvm1
# lvm2
#
# This configuration option has an automatic default value.
# format = "lvm2"
# Configuration option global/format_libraries.
# Shared libraries that process different metadata formats.
# If support for LVM1 metadata was compiled as a shared library use
# format_libraries = "liblvm2format1.so"
# This configuration option does not have a default value defined.
# Configuration option global/segment_libraries.
# This configuration option does not have a default value defined.
@@ -742,57 +842,10 @@ global {
# Location of /etc system configuration directory.
etc = "/etc"
# Configuration option global/locking_type.
# Type of locking to use.
#
# Accepted values:
# 0
# Turns off locking. Warning: this risks metadata corruption if
# commands run concurrently.
# 1
# LVM uses local file-based locking, the standard mode.
# 2
# LVM uses the external shared library locking_library.
# 3
# LVM uses built-in clustered locking with clvmd.
# This is incompatible with lvmetad. If use_lvmetad is enabled,
# LVM prints a warning and disables lvmetad use.
# 4
# LVM uses read-only locking which forbids any operations that
# might change metadata.
# 5
# Offers dummy locking for tools that do not need any locks.
# You should not need to set this directly; the tools will select
# when to use it instead of the configured locking_type.
# Do not use lvmetad or the kernel device-mapper driver with this
# locking type. It is used by the --readonly option that offers
# read-only access to Volume Group metadata that cannot be locked
# safely because it belongs to an inaccessible domain and might be
# in use, for example a virtual machine image or a disk that is
# shared by a clustered machine.
#
locking_type = 1
# Configuration option global/wait_for_locks.
# When disabled, fail if a lock request would block.
wait_for_locks = 1
# Configuration option global/fallback_to_clustered_locking.
# Attempt to use built-in cluster locking if locking_type 2 fails.
# If using external locking (type 2) and initialisation fails, with
# this enabled, an attempt will be made to use the built-in clustered
# locking. Disable this if using a customised locking_library.
fallback_to_clustered_locking = 1
# Configuration option global/fallback_to_local_locking.
# Use locking_type 1 (local) if locking_type 2 or 3 fail.
# If an attempt to initialise type 2 or type 3 locking failed, perhaps
# because cluster components such as clvmd are not running, with this
# enabled, an attempt will be made to use local file-based locking
# (type 1). If this succeeds, only commands against local VGs will
# proceed. VGs marked as clustered will be ignored.
fallback_to_local_locking = 1
# Configuration option global/locking_dir.
# Directory to use for LVM command file locks.
# Local non-LV directory that holds file-based locks while commands are
@@ -813,24 +866,12 @@ global {
# Search this directory first for shared libraries.
# This configuration option does not have a default value defined.
# Configuration option global/locking_library.
# The external locking library to use for locking_type 2.
# This configuration option has an automatic default value.
# locking_library = "liblvm2clusterlock.so"
# Configuration option global/abort_on_internal_errors.
# Abort a command that encounters an internal error.
# Treat any internal errors as fatal errors, aborting the process that
# encountered the internal error. Please only enable for debugging.
abort_on_internal_errors = 0
# Configuration option global/detect_internal_vg_cache_corruption.
# Internal verification of VG structures.
# Check if CRC matches when a parsed VG is used multiple times. This
# is useful to catch unexpected changes to cached VG structures.
# Please only enable for debugging.
detect_internal_vg_cache_corruption = 0
# Configuration option global/metadata_read_only.
# No operations that change on-disk metadata are permitted.
# Additionally, read-only commands that encounter metadata in need of
@@ -865,6 +906,17 @@ global {
#
mirror_segtype_default = "raid1"
# Configuration option global/support_mirrored_mirror_log.
# Enable mirrored 'mirror' log type for testing.
#
# Creating this type or converting to it is deprecated, but it can
# be enabled to test that activation of existing mirrored
# logs and conversion to disk/core works.
#
# Not supported for regular operation!
support_mirrored_mirror_log = 0
# Configuration option global/raid10_segtype_default.
# The segment type used by the -i -m combination.
# The --type raid10|mirror option overrides this setting.
@@ -913,41 +965,20 @@ global {
# This configuration option has an automatic default value.
# lvdisplay_shows_full_device_path = 0
# Configuration option global/use_lvmetad.
# Use lvmetad to cache metadata and reduce disk scanning.
# When enabled (and running), lvmetad provides LVM commands with VG
# metadata and PV state. LVM commands then avoid reading this
# information from disks which can be slow. When disabled (or not
# running), LVM commands fall back to scanning disks to obtain VG
# metadata. lvmetad is kept updated via udev rules which must be set
# up for LVM to work correctly. (The udev rules should be installed
# by default.) Without a proper udev setup, changes in the system's
# block device configuration will be unknown to LVM, and ignored
# until a manual 'pvscan --cache' is run. If lvmetad was running
# while use_lvmetad was disabled, it must be stopped, use_lvmetad
# enabled, and then started. When using lvmetad, LV activation is
# switched to an automatic, event-based mode. In this mode, LVs are
# activated based on incoming udev events that inform lvmetad when
# PVs appear on the system. When a VG is complete (all PVs present),
# it is auto-activated. The auto_activation_volume_list setting
# controls which LVs are auto-activated (all by default.)
# When lvmetad is updated (automatically by udev events, or directly
# by pvscan --cache), devices/filter is ignored and all devices are
# scanned by default. lvmetad always keeps unfiltered information
# which is provided to LVM commands. Each LVM command then filters
# based on devices/filter. This does not apply to other, non-regexp,
# filtering settings: component filters such as multipath and MD
# are checked during pvscan --cache. To filter a device and prevent
# scanning from the LVM system entirely, including lvmetad, use
# devices/global_filter.
use_lvmetad = 1
# Configuration option global/event_activation.
# Activate LVs based on system-generated device events.
# When a device appears on the system, a system-generated event runs
# the pvscan command to activate LVs if the new PV completes the VG.
# Use auto_activation_volume_list to select which LVs should be
# activated from these events (the default is all.)
# When event_activation is disabled, the system will generally run
# a direct activation command to activate LVs in complete VGs.
event_activation = 1
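# [Editor's aside, not part of lvm.conf] The system-generated event is
# typically a udev rule running pvscan against the newly appeared
# device; the major:minor pair below is an example:
$ pvscan --cache --activate ay 8:16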
# Configuration option global/lvmetad_update_wait_time.
# Number of seconds a command will wait for lvmetad update to finish.
# After waiting for this period, a command will not use lvmetad, and
# will revert to disk scanning.
# Configuration option global/use_aio.
# Use async I/O when reading and writing devices.
# This configuration option has an automatic default value.
# lvmetad_update_wait_time = 10
# use_aio = 1
# Configuration option global/use_lvmlockd.
# Use lvmlockd for locking among hosts using LVM on shared storage.
@@ -1073,6 +1104,17 @@ global {
# This configuration option has an automatic default value.
# cache_repair_options = [ "" ]
# Configuration option global/vdo_format_executable.
# The full path to the vdoformat command.
# LVM uses this command to initialize the data volume for a VDO type logical volume.
# This configuration option has an automatic default value.
# vdo_format_executable = "/usr/bin/vdoformat"
# Configuration option global/vdo_format_options.
# List of extra options passed to the standard vdoformat command.
# This configuration option has an automatic default value.
# vdo_format_options = [ "" ]
# Configuration option global/fsadm_executable.
# The full path to the fsadm command.
# LVM uses this command to help with lvresize -r operations.
@@ -1446,6 +1488,33 @@ activation {
#
thin_pool_autoextend_percent = 20
# Configuration option activation/vdo_pool_autoextend_threshold.
# Auto-extend a VDO pool when its usage exceeds this percent.
# Setting this to 100 disables automatic extension.
# The minimum value is 50 (a smaller value is treated as 50.)
# Also see vdo_pool_autoextend_percent.
# Automatic extension requires dmeventd to be monitoring the LV.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 10G
# VDO pool exceeds 7G, it is extended to 12G, and when it exceeds
# 8.4G, it is extended to 14.4G:
# vdo_pool_autoextend_threshold = 70
#
vdo_pool_autoextend_threshold = 100
# Configuration option activation/vdo_pool_autoextend_percent.
# Auto-extending a VDO pool adds this percent extra space.
# The amount of additional space added to a VDO pool is this
# percent of its current size.
#
# Example
# Using 70% autoextend threshold and 20% autoextend size, when a 10G
# VDO pool exceeds 7G, it is extended to 12G, and when it exceeds
# 8.4G, it is extended to 14.4G:
# This configuration option has an automatic default value.
# vdo_pool_autoextend_percent = 20
# Configuration option activation/mlock_filter.
# Do not mlock these memory areas.
# While activating devices, I/O to devices being (re)configured is
@@ -1612,24 +1681,6 @@ activation {
# This configuration option is advanced.
# This configuration option has an automatic default value.
# stripesize = 64
# Configuration option metadata/dirs.
# Directories holding live copies of text format metadata.
# These directories must not be on logical volumes!
# It's possible to use LVM with a couple of directories here,
# preferably on different (non-LV) filesystems, and with no other
# on-disk metadata (pvmetadatacopies = 0). Or this can be in addition
# to on-disk metadata areas. The feature was originally added to
# simplify testing and is not supported under low memory situations -
# the machine could lock up. Never edit any files in these directories
# by hand unless you are absolutely sure you know what you are doing!
# Use the supplied toolset to make changes (e.g. vgcfgrestore).
#
# Example
# dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
#
# This configuration option is advanced.
# This configuration option does not have a default value defined.
# }
# Configuration section report.
@@ -2080,6 +2131,23 @@ dmeventd {
# This configuration option has an automatic default value.
# thin_command = "lvm lvextend --use-policies"
# Configuration option dmeventd/vdo_library.
# The library dmeventd uses when monitoring a VDO pool device.
# libdevmapper-event-lvm2vdo.so monitors the filling of a pool
# and emits a warning through syslog when the usage exceeds 80%. The
# warning is repeated when 85%, 90% and 95% of the pool is filled.
# This configuration option has an automatic default value.
# vdo_library = "libdevmapper-event-lvm2vdo.so"
# Configuration option dmeventd/vdo_command.
# The plugin runs this command at each 5% increment when the VDO pool
# volume usage goes above 50%.
# A command starting with the 'lvm ' prefix is an internal lvm command.
# You can write your own handler to customise the behaviour in more detail.
# A user handler is specified with a full path starting with '/'.
# This configuration option has an automatic default value.
# vdo_command = "lvm lvextend --use-policies"
# Configuration option dmeventd/executable.
# The full path to the dmeventd binary.
# This configuration option has an automatic default value.

lvm2.changes

@@ -1,13 +1,34 @@
-------------------------------------------------------------------
Wed Aug 21 10:10:30 UTC 2019 - ghe@suse.com
Mon Sep 02 11:21:03 UTC 2019 - heming.zhao@suse.com
- MD devices should be detected by LVM2 with metadata=1.0/0.9 (bsc#1145231)
+ bug-1145231_lvmetad-improve-scan-for-pvscan-all.patch
+ bug-1145231_scan-use-full-md-filter-when-md-1.0-devices-are-pres.patch
+ bug-1145231_scan-enable-full-md-filter-when-md-1.0-devices-are-p.patch
+ bug-1145231_scan-md-metadata-version-0.90-is-at-the-end-of-disk.patch
+ bug-1145231_pvscan-lvmetad-use-full-md-filter-when-md-1.0-device.patch
+ bug-1145231_pvscan-lvmetad-use-udev-info-to-improve-md-component.patch
- Update to LVM2.2.03.05
- Drop the lvm2-clvm and lvm2-cmirrord rpms (jsc#PM-1324)
- Fix Out of date package (bsc#1111734)
- Fix occasional slow shutdowns with kernel 5.0.0 and up (bsc#1137648)
- Remove clvmd
- Remove lvmlib (api)
- Remove lvmetad
- Drop patches that have been merged into upstream
- bug-1114113_metadata-prevent-writing-beyond-metadata-area.patch
- bug-1137296_pvremove-vgextend-fix-using-device-aliases-with-lvmetad.patch
- bug-1135984_cache-support-no_discard_passdown.patch
- Drop patches that no longer exist or are unsupported upstream
- bsc1080299-detect-clvm-properly.patch
- bug-998893_make_pvscan_service_after_multipathd.patch
- bug-978055_clvmd-try-to-refresh-device-cache-on-the-first-failu.patch
- bug-950089_test-fix-lvm2-testsuite-build-error.patch
- bug-1072624_test-lvmetad_dump-always-timed-out-when-using-nc.patch
- tests-specify-python3-as-the-script-interpreter.patch
- Update spec files
- merge device-mapper, lvm2-lockd, lvm2 into one spec file
- clvmd, lvmlib (api) and lvmetad have been removed, so delete the related content in the spec file
- Update lvm.conf files
- remove all lvmetad lines/keywords
- add event_activation
- remove fallback_to_lvm1 & related items
- remove locking_type/fallback_to_clustered_locking/fallback_to_local_locking items
- remove locking_library item
- remove all special filter rules
-------------------------------------------------------------------
Tue Jul 9 10:00:05 UTC 2019 - ghe@suse.com

lvm2.spec

@@ -32,13 +32,13 @@
%global flavor @BUILD_FLAVOR@%{nil}
%define psuffix %{nil}
%if "%{flavor}" == "devicemapper"
%define psuffix -devicemapper
%define psuffix -device-mapper
%bcond_without devicemapper
%else
%bcond_with devicemapper
%endif
%if "%{flavor}" == "lockd"
%define psuffix -clustering
%define psuffix -lvmlockd
%bcond_without lockd
%else
%bcond_with lockd
@@ -54,7 +54,7 @@ Source: ftp://sources.redhat.com/pub/lvm2/LVM2.%{version}.tgz
Source1: lvm.conf
Source42: ftp://sources.redhat.com/pub/lvm2/LVM2.%{version}.tgz.asc
# Upstream patches
#Patch0001: bug-1122666_devices-drop-open-error-message.patch
Patch0001: bug-1122666_devices-drop-open-error-message.patch
# SUSE patches: 1000+ for LVM
# Never upstream
Patch1001: cmirrord_remove_date_time_from_compilation.patch
@@ -67,6 +67,12 @@ Patch2001: bug-1012973_simplify-special-case-for-md-in-69-dm-lvm-metadata.p
Patch3001: bug-1043040_test-fix-read-ahead-issues-in-test-scripts.patch
# patches specific to lvm2.spec
Patch4001: bug-1037309_Makefile-skip-compliling-daemons-lvmlockd-directory.patch
# To detect modprobe during build
BuildRequires: kmod-compat
BuildRequires: libaio-devel
BuildRequires: pkgconfig
BuildRequires: thin-provisioning-tools >= %{thin_provisioning_version}
BuildRequires: pkgconfig(libudev)
Requires: device-mapper >= %{device_mapper_version}
Requires: modutils
Requires(post): coreutils
@@ -76,41 +82,22 @@ Obsoletes: lvm2-cmirrord
%{?systemd_requires}
%if %{with devicemapper}
BuildRequires: gcc-c++
BuildRequires: kmod-compat
BuildRequires: libaio-devel
BuildRequires: pkgconfig
BuildRequires: suse-module-tools
BuildRequires: thin-provisioning-tools >= %{thin_provisioning_version}
BuildRequires: pkgconfig(libselinux)
BuildRequires: pkgconfig(libsepol)
BuildRequires: pkgconfig(libudev)
BuildRequires: pkgconfig(systemd)
%else
%if %{with lockd}
# To detect modprobe during build
BuildRequires: kmod-compat
BuildRequires: libaio-devel
BuildRequires: libcorosync-devel
BuildRequires: libdlm-devel
BuildRequires: pkgconfig
BuildRequires: thin-provisioning-tools >= %{thin_provisioning_version}
BuildRequires: pkgconfig(blkid)
BuildRequires: pkgconfig(libudev)
%if %{with lockd}
BuildRequires: libdlm-devel
%if 0%{_supportsanlock} == 1
BuildRequires: sanlock-devel >= %{sanlock_version}
%endif
%else
BuildRequires: gcc-c++
# To detect modprobe during build
BuildRequires: kmod-compat
BuildRequires: libaio-devel
BuildRequires: libcorosync-devel
BuildRequires: libselinux-devel
BuildRequires: pkgconfig
BuildRequires: readline-devel
BuildRequires: thin-provisioning-tools >= %{thin_provisioning_version}
BuildRequires: pkgconfig(blkid)
BuildRequires: pkgconfig(libudev)
BuildRequires: pkgconfig(systemd)
BuildRequires: pkgconfig(udev)
%endif
@@ -122,7 +109,7 @@ Volume Manager.
%prep
%setup -q -n LVM2.%{version}
#%patch0001 -p1
%patch0001 -p1
%patch1001 -p1
%patch1002 -p1
%patch1003 -p1
@@ -138,7 +125,6 @@ Volume Manager.
%if !%{with devicemapper} && !%{with lockd}
extra_opts="
--enable-blkid_wiping
--enable-cmdlib
--enable-lvmpolld
--enable-realtime
--with-cache=internal
@@ -156,7 +142,6 @@ extra_opts="
%if %{with lockd}
extra_opts="
--enable-blkid_wiping
--enable-cmdlib
--enable-lvmpolld
--enable-realtime
--with-default-locking-dir=/run/lock/lvm
@@ -682,6 +667,8 @@ LVM commands use lvmlockd to coordinate access to shared storage.
Summary: LVM2 command line library
Group: System/Libraries
Conflicts: %{name} < %{version}
Obsoletes: liblvm2app2_2
Obsoletes: liblvm2cmd2_02
%description -n %{cmdlib}
The lvm2 command line library allows building programs that manage

tests-specify-python3-as-the-script-interpreter.patch

@@ -1,27 +0,0 @@
From 3f768d29ceb5427b6e8de4fe35e2c1001409d750 Mon Sep 17 00:00:00 2001
From: Gang He <ghe@suse.com>
Date: Wed, 20 Jun 2018 14:04:52 +0800
Subject: [PATCH] tests: specify python3 as the script interpreter
Specify /usr/bin/python3 as the script interpreter in the
python_lvm_unit.py.in file; otherwise, there will be a build
error in OBS.
Signed-off-by: Gang He <ghe@suse.com>
---
test/api/python_lvm_unit.py.in | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/test/api/python_lvm_unit.py.in b/test/api/python_lvm_unit.py.in
index 78ced7e31..c6a7c9905 100755
--- a/test/api/python_lvm_unit.py.in
+++ b/test/api/python_lvm_unit.py.in
@@ -1,4 +1,4 @@
-#!@PYTHON@
+#!/usr/bin/python3
# Copyright (C) 2012-2013 Red Hat, Inc. All rights reserved.
#
--
2.12.3
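For reference, configure normally expands @PYTHON@ via AC_SUBST; this patch pins the interpreter instead. The effect can be mimicked with sed (a sketch, not part of the patch):

$ sed 's|@PYTHON@|/usr/bin/python3|' test/api/python_lvm_unit.py.in | head -1
#!/usr/bin/python3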