lvm2/0004-lib-locking-Parse-PV-list-for-IDM-locking.patch
Gang He d0810cf04c Accepting request 900342 from home:hmzhao:branches:openSUSE:Factory
- update lvm2 from LVM2.2.03.10 to LVM2.2.03.12 (bsc#1187010)
  *** WHATS_NEW from 2.03.11 to 2.03.12 ***
  Version 2.03.12 - 07th May 2021
  ===============================
    Allow attaching cache to thin data volume.
    Fix memleak when generating list of outdated pvs.
    Better hyphenation usage in man pages.
    Replace use of deprecated security_context_t with char*.
    Configure supports AIO_LIBS and AIO_CFLAGS.
    Improve build process for static builds.
    New --setautoactivation option to modify LV or VG auto activation.
    New metadata based autoactivation property for LVs and VGs.
    Improve signal handling with lvmpolld.
    Signal handler can interrupt command also for SIGTERM.
    Lvreduce --yes support.
    Add configure option --with/out-symvers for non-glibc builds.
    Report error when the filesystem is missing on fsadm resized volume.
    Handle blockdev --getsize64 support better in fsadm.
    Do not include editline/history.h when using editline library.
    Support error and zero segtype for thin-pool data for testing.
    Support mixed extension for striped, error and zero segtypes.
    Support resize also for stacked virtual volumes.
    Skip dm-zero devices just like with dm-error target.
    Reduce ioctl() calls when checking target status.
    Merge polling does not fail, when LV is found to be already merged.
    Poll volumes with at least 100ms delays.
    Do not flush dm cache when cached LV is going to be removed.
    New lvmlockctl_kill_command configuration option.
    Support interruption while waiting on device close before deactivation.
    Flush thin-pool messages before removing more thin volumes.
    Improve hash function with fewer collisions and make it faster.
    Reduce ioctl count when deactivating volumes.
    Reduce the amount of metadata parsing.
    Enhance performance of lvremove and vgremove commands.
    Support interruption when taking archive and backup.
    Accelerate large lvremoves.
    Speedup search for cached device nodes.
    Speedup command initialization.
    Add devices file feature, off by default for now.
    Support extension of writecached volumes.
    Fix problem with unbound variable usage within fsadm.
    Fix IMSM MD RAID detection on 4k devices.
    Check for presence of VDO target before starting any conversion.
    Support metadata profiles with volume VDO pool conversions.
    Support -Zn for conversion of already formatted VDO pools.
    Avoid removing LVs on error path of lvconvert during creation volumes.
    Fix crashing lvdisplay when thin volume was waiting for merge.
    Support option --errorwhenfull when converting volume to thin-pool.
    Improve thin-performance profile support for conversion to thin-pool.
    Add workaround to avoid read of internal 'converted' devices.
    Prohibit merging snapshot into the read-only thick snapshot origin.
    Restore support for flipping rw/r permissions for thin snapshot origin.
    Support resize of cached volumes.
    Disable autoactivation with global/event_activation=0.
    Check if lvcreate passes read_only_volume_list with tags and skips zeroing.
    Allocation prints better error when metadata cannot fit on a single PV.
    Pvmove can better resolve full thin-pool tree move.
    Limit pool metadata spare to 16GiB.
    Improve conversion and allocation of pool metadata.
    Support thin pool metadata 15.88GiB, adds 64MiB, thin_pool_crop_metadata=0.
    Enhance lvdisplay to report raid available/partial.
    Support online rename of VDO pools.
    Improve removal of pmspare when last pool is removed.
    Fix problem with wiping of converted LVs.
    Fix memleak in scanning (2.03.11).
    Fix corner case allocation for thin-pools.
  
  Version 2.03.11 - 08th January 2021
  ===================================
    Fix pvck handling MDA at offset different from 4096.
    Partial or degraded activation of writecache is not allowed.
    Enhance error handling for fsadm and handle fsck results correctly.
    Dmeventd lvm plugin ignores higher reserved_stack lvm.conf values.
    Support using BLKZEROOUT for clearing devices.
    Support interruption when wiping LVs.
    Support interruption for bcache waiting.
    Fix bcache when device has too many failing writes.
    Fix bcache waiting for IO completion with failing disks.
    Configure uses its own python path name order to prefer python3.
    Add configure --enable-editline support as an alternative to readline.
    Enhance reporting and error handling when creating thin volumes.
    Enable vgsplit for VDO volumes.
    Lvextend of vdo pool volumes ensures at least 1 new VDO slab is added.
    Use revert_lv() on reload error path after vg_revert().
    Configure --with-integrity enabled.
    Restore lost signal blocking while VG lock is held.
    Improve estimation of needed extents when creating thin-pool.
    Use extra 1% when resizing thin-pool metadata LV with --use-policy.
    Enhance --use-policy percentage rounding.
    Configure --with-vdo and --with-writecache as internal segments.
    Improve VDO man page examples.
    Allow pvmove of writecache origin.
    Report integrity fields.
    Integrity volumes defaults to journal mode.
    Switch code base to use flexible array syntax.
    Fix 64bit math when calculating cachevol size.
    Preserve uint32_t for seqno handling.
    Switch from mmap to plain read when loading regular files.
    Update lvmvdo man page and better explain DISCARD usage.
  *** WHATS_NEW_DM from 1.02.175 to 1.02.177 ***
  Version 1.02.177 - 07th May 2021
  ================================
    Configure proceeds without libaio to allow build of device-mapper only.
    Fix symbol versioning build with -O2 -flto.
    Add dm_tree_node_add_thin_pool_target_v1 with crop_metadata support.
- Drop patches that have been merged into upstream
  - bug-1175565_01-tools-move-struct-element-before-variable-lenght-lis.patch
  - bug-1175565_02-gcc-change-zero-sized-array-to-fexlible-array.patch
  - bug-1175565_03-gcc-zero-sized-array-to-fexlible-array-C99.patch
  - bug-1178680_add-metadata-based-autoactivation-property-for-VG-an.patch
  - bug-1185190_01-pvscan-support-disabled-event_activation.patch
  - bug-1185190_02-config-improve-description-for-event_activation.patch
- Add patch
  + 0001-lvmlockd-idm-Introduce-new-locking-scheme.patch
  + 0002-lvmlockd-idm-Hook-Seagate-IDM-wrapper-APIs.patch
  + 0003-lib-locking-Add-new-type-idm.patch
  + 0004-lib-locking-Parse-PV-list-for-IDM-locking.patch
  + 0005-tools-Add-support-for-idm-lock-type.patch
  + 0006-configure-Add-macro-LOCKDIDM_SUPPORT.patch
  + 0007-enable-command-syntax-for-thin-and-writecache.patch
  + 0008-lvremove-fix-removing-thin-pool-with-writecache-on-d.patch
  + 0009-vdo-fix-preload-of-kvdo.patch
  + 0010-writecache-fix-lv_on_pmem.patch
  + 0011-writecache-don-t-pvmove-device-used-by-writecache.patch
  + 0012-pvchange-fix-file-locking-deadlock.patch
  + 0013-tests-Enable-the-testing-for-IDM-locking-scheme.patch
  + 0014-tests-Support-multiple-backing-devices.patch
  + 0015-tests-Cleanup-idm-context-when-prepare-devices.patch
  + 0016-tests-Add-checking-for-lvmlockd-log.patch
  + 0017-tests-stress-Add-single-thread-stress-testing.patch
  + 0018-tests-stress-Add-multi-threads-stress-testing-for-VG.patch
  + 0019-tests-stress-Add-multi-threads-stress-testing-for-PV.patch
  + 0020-tests-Support-idm-failure-injection.patch
  + 0021-tests-Add-testing-for-lvmlockd-failure.patch
  + 0022-tests-idm-Add-testing-for-the-fabric-failure.patch
  + 0023-tests-idm-Add-testing-for-the-fabric-failure-and-tim.patch
  + 0024-tests-idm-Add-testing-for-the-fabric-s-half-brain-fa.patch
  + 0025-tests-idm-Add-testing-for-IDM-lock-manager-failure.patch
  + 0026-tests-multi-hosts-Add-VG-testing.patch
  + 0027-tests-multi-hosts-Add-LV-testing.patch
  + 0028-tests-multi-hosts-Test-lease-timeout-with-LV-exclusi.patch
  + 0029-tests-multi-hosts-Test-lease-timeout-with-LV-shareab.patch
  + 0030-fix-empty-mem-pool-leak.patch
  + 0031-tests-writecache-blocksize-add-dm-cache-tests.patch
  + 0032-tests-rename-test.patch
  + 0033-tests-add-writecache-cache-blocksize-2.patch
  + 0034-lvmlockd-Fix-the-compilation-warning.patch
  + 0035-devices-don-t-use-deleted-loop-backing-file-for-devi.patch
  + 0036-man-help-fix-common-option-listing.patch
  + 0037-archiving-take-archive-automatically.patch
  + 0038-backup-automatically-store-data-on-vg_unlock.patch
  + 0039-archive-avoid-abuse-of-internal-flag.patch
  + 0040-pvck-add-lock_global-before-clean_hint_file.patch
  + 0041-lvmdevices-add-deviceidtype-option.patch
- Update patch
  - bug-1184687_Add-nolvm-for-kernel-cmdline.patch
  - fate-31841_fsadm-add-support-for-btrfs.patch
- lvm.conf (new items sketched after this list)
  - trim tail space
  - fix typo
  - [new item] devices/use_devicesfile
  - [new item] devices/devicesfile
  - [new item] devices/search_for_devnames
  - [new item] allocation/thin_pool_crop_metadata
  - [new item] global/lvmlockctl_kill_command
  - [new item] global/vdo_disabled_features
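  A sketch of how the new items appear in lvm.conf (with what I
  understand to be the 2.03.12 defaults; the vdo_disabled_features
  value is only an illustrative example, its default is empty):
    devices {
        # Devices file feature is off by default for now.
        use_devicesfile = 0
        devicesfile = "system.devices"
        search_for_devnames = "auto"
    }
    allocation {
        # Keep 0 to allow thin-pool metadata up to 15.88GiB.
        thin_pool_crop_metadata = 0
    }
    global {
        # Command lvmlockctl runs to kill a misbehaving host, empty by default.
        lvmlockctl_kill_command = ""
        # Example: disable online rename of VDO pools.
        vdo_disabled_features = [ "online_rename" ]
    }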

OBS-URL: https://build.opensuse.org/request/show/900342
OBS-URL: https://build.opensuse.org/package/show/Base:System/lvm2?expand=0&rev=300
2021-06-16 09:38:53 +00:00


From affe1af148d5d939ffad7bde2ad51b0f386a44b7 Mon Sep 17 00:00:00 2001
From: Leo Yan <leo.yan@linaro.org>
Date: Fri, 7 May 2021 10:25:15 +0800
Subject: [PATCH 04/33] lib: locking: Parse PV list for IDM locking

For shared VG or LV locking, the IDM locking scheme needs the PV list
associated with the VG or LV for sending SCSI commands, so it needs
somewhere to generate that PV list.

Reviewing the flow of LVM commands, the best place to generate the PV
list is the locking lib, which is why this patch parses the PV list
there. It iterates over all the PV nodes one by one and compares each
against the VG name or the LV prefix string. Any PV that matches is
added to the PV list, and the finished list is sent to the lvmlockd
daemon.

As mentioned, it compares against an LV prefix string of the form
"lv_name_"; the reason is that all relevant PVs must be found, e.g. a
thin pool has LVs for metadata, pool, error, and the raw LV, so the
prefix string can be used to find every PV belonging to the thin
pool.

The global lock is not covered in this patch. To avoid a
chicken-and-egg problem, the global lock must be prepared before any
locking can be used, so the global lock's PV list is established in
the lvmlockd daemon by iterating over all drives that have a
partition labeled "propeller".
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Signed-off-by: Heming Zhao <heming.zhao@suse.com>
---
lib/locking/lvmlockd.c | 258 +++++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 241 insertions(+), 17 deletions(-)
diff --git a/lib/locking/lvmlockd.c b/lib/locking/lvmlockd.c
index 040c4246d718..766be71badf3 100644
--- a/lib/locking/lvmlockd.c
+++ b/lib/locking/lvmlockd.c
@@ -25,6 +25,11 @@ static int _use_lvmlockd = 0; /* is 1 if command is configured to use lv
static int _lvmlockd_connected = 0; /* is 1 if command is connected to lvmlockd */
static int _lvmlockd_init_failed = 0; /* used to suppress further warnings */
+struct lvmlockd_pvs {
+ char **path;
+ int num;
+};
+
void lvmlockd_set_socket(const char *sock)
{
_lvmlockd_socket = sock;
@@ -178,18 +183,34 @@ static int _lockd_result(daemon_reply reply, int *result, uint32_t *lockd_flags)
return 1;
}
-static daemon_reply _lockd_send(const char *req_name, ...)
+static daemon_reply _lockd_send_with_pvs(const char *req_name,
+ const struct lvmlockd_pvs *lock_pvs, ...)
{
- va_list ap;
daemon_reply repl;
daemon_request req;
+ int i;
+ char key[32];
+ const char *val;
+ va_list ap;
req = daemon_request_make(req_name);
- va_start(ap, req_name);
+ va_start(ap, lock_pvs);
daemon_request_extend_v(req, ap);
va_end(ap);
+ /* Pass PV list */
+ if (lock_pvs && lock_pvs->num) {
+ daemon_request_extend(req, "path_num = " FMTd64,
+ (int64_t)(lock_pvs)->num, NULL);
+
+ for (i = 0; i < lock_pvs->num; i++) {
+ snprintf(key, sizeof(key), "path[%d] = %%s", i);
+ val = lock_pvs->path[i] ? lock_pvs->path[i] : "none";
+ daemon_request_extend(req, key, val, NULL);
+ }
+ }
+
repl = daemon_send(_lvmlockd, req);
daemon_request_destroy(req);
@@ -197,6 +218,166 @@ static daemon_reply _lockd_send(const char *req_name, ...)
return repl;
}
+#define _lockd_send(req_name, args...) \
+ _lockd_send_with_pvs(req_name, NULL, ##args)
+
+static int _lockd_retrive_vg_pv_num(struct volume_group *vg)
+{
+ struct pv_list *pvl;
+ int num = 0;
+
+ dm_list_iterate_items(pvl, &vg->pvs)
+ num++;
+
+ return num;
+}
+
+static void _lockd_retrive_vg_pv_list(struct volume_group *vg,
+ struct lvmlockd_pvs *lock_pvs)
+{
+ struct pv_list *pvl;
+ int pv_num, i;
+
+ memset(lock_pvs, 0x0, sizeof(*lock_pvs));
+
+ pv_num = _lockd_retrive_vg_pv_num(vg);
+ if (!pv_num) {
+ log_error("Fail to any PVs for VG %s", vg->name);
+ return;
+ }
+
+ /* Allocate buffer for PV list */
+ lock_pvs->path = zalloc(sizeof(*lock_pvs->path) * pv_num);
+ if (!lock_pvs->path) {
+ log_error("Fail to allocate PV list for VG %s", vg->name);
+ return;
+ }
+
+ i = 0;
+ dm_list_iterate_items(pvl, &vg->pvs) {
+ lock_pvs->path[i] = strdup(pv_dev_name(pvl->pv));
+ if (!lock_pvs->path[i]) {
+ log_error("Fail to allocate PV path for VG %s", vg->name);
+ goto fail;
+ }
+
+ log_debug("VG %s find PV device %s", vg->name, lock_pvs->path[i]);
+ i++;
+ }
+
+ lock_pvs->num = pv_num;
+ return;
+
+fail:
+ for (i = 0; i < pv_num; i++) {
+ if (!lock_pvs->path[i])
+ continue;
+ free(lock_pvs->path[i]);
+ }
+ free(lock_pvs->path);
+ return;
+}
+
+static int _lockd_retrive_lv_pv_num(struct volume_group *vg,
+ const char *lv_name)
+{
+ struct logical_volume *lv = find_lv(vg, lv_name);
+ struct pv_list *pvl;
+ int num;
+
+ if (!lv)
+ return 0;
+
+ num = 0;
+ dm_list_iterate_items(pvl, &vg->pvs) {
+ if (lv_is_on_pv(lv, pvl->pv))
+ num++;
+ }
+
+ return num;
+}
+
+static void _lockd_retrive_lv_pv_list(struct volume_group *vg,
+ const char *lv_name,
+ struct lvmlockd_pvs *lock_pvs)
+{
+ struct logical_volume *lv = find_lv(vg, lv_name);
+ struct pv_list *pvl;
+ int pv_num, i = 0;
+
+ memset(lock_pvs, 0x0, sizeof(*lock_pvs));
+
+ /* Cannot find any existed LV? */
+ if (!lv)
+ return;
+
+ pv_num = _lockd_retrive_lv_pv_num(vg, lv_name);
+ if (!pv_num) {
+ /*
+ * Fixup for 'lvcreate --type error -L1 -n $lv1 $vg', in this
+ * case, the drive path list is empty since it doesn't establish
+ * the structure 'pvseg->lvseg->lv->name'.
+ *
+ * So create drive path list with all drives in the VG.
+ */
+ log_error("Fail to find any PVs for %s/%s, try to find PVs from VG instead",
+ vg->name, lv_name);
+ _lockd_retrive_vg_pv_list(vg, lock_pvs);
+ return;
+ }
+
+ /* Allocate buffer for PV list */
+ lock_pvs->path = malloc(sizeof(*lock_pvs->path) * pv_num);
+ if (!lock_pvs->path) {
+ log_error("Fail to allocate PV list for %s/%s", vg->name, lv_name);
+ return;
+ }
+
+ dm_list_iterate_items(pvl, &vg->pvs) {
+ if (lv_is_on_pv(lv, pvl->pv)) {
+ lock_pvs->path[i] = strdup(pv_dev_name(pvl->pv));
+ if (!lock_pvs->path[i]) {
+ log_error("Fail to allocate PV path for LV %s/%s",
+ vg->name, lv_name);
+ goto fail;
+ }
+
+ log_debug("Find PV device %s for LV %s/%s",
+ lock_pvs->path[i], vg->name, lv_name);
+ i++;
+ }
+ }
+
+ lock_pvs->num = pv_num;
+ return;
+
+fail:
+ for (i = 0; i < pv_num; i++) {
+ if (!lock_pvs->path[i])
+ continue;
+ free(lock_pvs->path[i]);
+ lock_pvs->path[i] = NULL;
+ }
+ free(lock_pvs->path);
+ lock_pvs->path = NULL;
+ lock_pvs->num = 0;
+ return;
+}
+
+static void _lockd_free_pv_list(struct lvmlockd_pvs *lock_pvs)
+{
+ int i;
+
+ for (i = 0; i < lock_pvs->num; i++) {
+ free(lock_pvs->path[i]);
+ lock_pvs->path[i] = NULL;
+ }
+
+ free(lock_pvs->path);
+ lock_pvs->path = NULL;
+ lock_pvs->num = 0;
+}
+
/*
* result/lockd_flags are values returned from lvmlockd.
*
@@ -227,6 +408,7 @@ static int _lockd_request(struct cmd_context *cmd,
const char *lv_lock_args,
const char *mode,
const char *opts,
+ const struct lvmlockd_pvs *lock_pvs,
int *result,
uint32_t *lockd_flags)
{
@@ -251,7 +433,8 @@ static int _lockd_request(struct cmd_context *cmd,
cmd_name = "none";
if (vg_name && lv_name) {
- reply = _lockd_send(req_name,
+ reply = _lockd_send_with_pvs(req_name,
+ lock_pvs,
"cmd = %s", cmd_name,
"pid = " FMTd64, (int64_t) pid,
"mode = %s", mode,
@@ -271,7 +454,8 @@ static int _lockd_request(struct cmd_context *cmd,
req_name, mode, vg_name, lv_name, *result, *lockd_flags);
} else if (vg_name) {
- reply = _lockd_send(req_name,
+ reply = _lockd_send_with_pvs(req_name,
+ lock_pvs,
"cmd = %s", cmd_name,
"pid = " FMTd64, (int64_t) pid,
"mode = %s", mode,
@@ -288,7 +472,8 @@ static int _lockd_request(struct cmd_context *cmd,
req_name, mode, vg_name, *result, *lockd_flags);
} else {
- reply = _lockd_send(req_name,
+ reply = _lockd_send_with_pvs(req_name,
+ lock_pvs,
"cmd = %s", cmd_name,
"pid = " FMTd64, (int64_t) pid,
"mode = %s", mode,
@@ -1134,6 +1319,7 @@ int lockd_start_vg(struct cmd_context *cmd, struct volume_group *vg, int start_i
int host_id = 0;
int result;
int ret;
+ struct lvmlockd_pvs lock_pvs;
memset(uuid, 0, sizeof(uuid));
@@ -1169,7 +1355,28 @@ int lockd_start_vg(struct cmd_context *cmd, struct volume_group *vg, int start_i
host_id = find_config_tree_int(cmd, local_host_id_CFG, NULL);
}
- reply = _lockd_send("start_vg",
+ /*
+ * Create the VG's PV list when start the VG, the PV list
+ * is passed to lvmlockd, and the the PVs path will be used
+ * to send SCSI commands for idm locking scheme.
+ */
+ if (!strcmp(vg->lock_type, "idm")) {
+ _lockd_retrive_vg_pv_list(vg, &lock_pvs);
+ reply = _lockd_send_with_pvs("start_vg",
+ &lock_pvs,
+ "pid = " FMTd64, (int64_t) getpid(),
+ "vg_name = %s", vg->name,
+ "vg_lock_type = %s", vg->lock_type,
+ "vg_lock_args = %s", vg->lock_args ?: "none",
+ "vg_uuid = %s", uuid[0] ? uuid : "none",
+ "version = " FMTd64, (int64_t) vg->seqno,
+ "host_id = " FMTd64, (int64_t) host_id,
+ "opts = %s", start_init ? "start_init" : "none",
+ NULL);
+ _lockd_free_pv_list(&lock_pvs);
+ } else {
+ reply = _lockd_send_with_pvs("start_vg",
+ NULL,
"pid = " FMTd64, (int64_t) getpid(),
"vg_name = %s", vg->name,
"vg_lock_type = %s", vg->lock_type,
@@ -1179,6 +1386,7 @@ int lockd_start_vg(struct cmd_context *cmd, struct volume_group *vg, int start_i
"host_id = " FMTd64, (int64_t) host_id,
"opts = %s", start_init ? "start_init" : "none",
NULL);
+ }
if (!_lockd_result(reply, &result, &lockd_flags)) {
ret = 0;
@@ -1406,7 +1614,7 @@ int lockd_global_create(struct cmd_context *cmd, const char *def_mode, const cha
req:
if (!_lockd_request(cmd, "lock_gl",
NULL, vg_lock_type, NULL, NULL, NULL, NULL, mode, NULL,
- &result, &lockd_flags)) {
+ NULL, &result, &lockd_flags)) {
/* No result from lvmlockd, it is probably not running. */
log_error("Global lock failed: check that lvmlockd is running.");
return 0;
@@ -1642,7 +1850,7 @@ int lockd_global(struct cmd_context *cmd, const char *def_mode)
if (!_lockd_request(cmd, "lock_gl",
NULL, NULL, NULL, NULL, NULL, NULL, mode, opts,
- &result, &lockd_flags)) {
+ NULL, &result, &lockd_flags)) {
/* No result from lvmlockd, it is probably not running. */
/* We don't care if an unlock fails. */
@@ -1910,7 +2118,7 @@ int lockd_vg(struct cmd_context *cmd, const char *vg_name, const char *def_mode,
if (!_lockd_request(cmd, "lock_vg",
vg_name, NULL, NULL, NULL, NULL, NULL, mode, NULL,
- &result, &lockd_flags)) {
+ NULL, &result, &lockd_flags)) {
/*
* No result from lvmlockd, it is probably not running.
* Decide if it is ok to continue without a lock in
@@ -2170,6 +2378,7 @@ int lockd_lv_name(struct cmd_context *cmd, struct volume_group *vg,
uint32_t lockd_flags;
int refreshed = 0;
int result;
+ struct lvmlockd_pvs lock_pvs;
/*
* Verify that when --readonly is used, no LVs should be activated or used.
@@ -2235,13 +2444,28 @@ int lockd_lv_name(struct cmd_context *cmd, struct volume_group *vg,
retry:
log_debug("lockd LV %s/%s mode %s uuid %s", vg->name, lv_name, mode, lv_uuid);
- if (!_lockd_request(cmd, "lock_lv",
- vg->name, vg->lock_type, vg->lock_args,
- lv_name, lv_uuid, lock_args, mode, opts,
- &result, &lockd_flags)) {
- /* No result from lvmlockd, it is probably not running. */
- log_error("Locking failed for LV %s/%s", vg->name, lv_name);
- return 0;
+ /* Pass PV list for IDM lock type */
+ if (!strcmp(vg->lock_type, "idm")) {
+ _lockd_retrive_lv_pv_list(vg, lv_name, &lock_pvs);
+ if (!_lockd_request(cmd, "lock_lv",
+ vg->name, vg->lock_type, vg->lock_args,
+ lv_name, lv_uuid, lock_args, mode, opts,
+ &lock_pvs, &result, &lockd_flags)) {
+ _lockd_free_pv_list(&lock_pvs);
+ /* No result from lvmlockd, it is probably not running. */
+ log_error("Locking failed for LV %s/%s", vg->name, lv_name);
+ return 0;
+ }
+ _lockd_free_pv_list(&lock_pvs);
+ } else {
+ if (!_lockd_request(cmd, "lock_lv",
+ vg->name, vg->lock_type, vg->lock_args,
+ lv_name, lv_uuid, lock_args, mode, opts,
+ NULL, &result, &lockd_flags)) {
+ /* No result from lvmlockd, it is probably not running. */
+ log_error("Locking failed for LV %s/%s", vg->name, lv_name);
+ return 0;
+ }
}
/* The lv was not active/locked. */
--
1.8.3.1