qemu/qemu-nbd-Use-SOMAXCONN-for-socket-listen.patch (forked from pool/qemu)
Commit 112fb09f1a by Bruce Rogers (2021-02-17 02:19:36 +00:00): Accepting request 873002 from home:bfrogers:branches:Virtualization
- Fix uninitialized variable in ipxe driver code (boo#1181922)
  ath5k-Add-missing-AR5K_EEPROM_READ-in-at.patch
- Add a few improvements to the git-based package workflow scripts
- Include additional upstream patches designated as stable material
  and reviewed for applicability to include here
  blockjob-Fix-crash-with-IOthread-when-bl.patch
  monitor-Fix-assertion-failure-on-shutdow.patch
  qemu-nbd-Use-SOMAXCONN-for-socket-listen.patch
  qemu-storage-daemon-Enable-object-add.patch

OBS-URL: https://build.opensuse.org/request/show/873002
OBS-URL: https://build.opensuse.org/package/show/Virtualization/qemu?expand=0&rev=617

From: Eric Blake <eblake@redhat.com>
Date: Tue, 9 Feb 2021 09:27:58 -0600
Subject: qemu-nbd: Use SOMAXCONN for socket listen() backlog
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Git-commit: 582d4210eb2f2ab5baac328fe4b479cd86da1647

Our default of a backlog of 1 connection is rather puny; it gets in
the way when we are explicitly allowing multiple clients (such as
qemu-nbd -e N [--shared], or nbd-server-start with its default
"max-connections":0 for unlimited), but is even a problem when we
stick to qemu-nbd's default of only 1 active client but use -t
[--persistent], where a second client can start using the server once
the first finishes. While the effects are less noticeable on TCP
sockets (since the client can poll() to learn when the server is ready
again), they are definitely observable on Unix sockets, where, on
Linux, a client will fail with EAGAIN, with no recourse but to sleep
an arbitrary amount of time before retrying, if the server backlog is
already full.

Since QMP nbd-server-start is always persistent, it now always
requests a backlog of SOMAXCONN; meanwhile, qemu-nbd will request
SOMAXCONN if persistent, otherwise its backlog should be based on the
expected number of clients.

See https://bugzilla.redhat.com/1925045 for a demonstration of where
our low backlog prevents libnbd from connecting as many parallel
clients as it wants.

Reported-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
CC: qemu-stable@nongnu.org
Message-Id: <20210209152759.209074-2-eblake@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
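Illustrative aside (not part of the patch): a minimal standalone C
sketch of the Unix-socket behavior described above. It listens with
the old backlog of 1 and never accepts, so later non-blocking
connect() attempts fail immediately with EAGAIN on Linux; the socket
path is arbitrary and error checking is omitted for brevity.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);

    strcpy(addr.sun_path, "/tmp/backlog-demo.sock");
    unlink(addr.sun_path);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);                    /* the old qemu-nbd backlog */

    for (int i = 0; i < 4; i++) {
        /* never accept(), so pending connections fill the backlog */
        int c = socket(AF_UNIX, SOCK_STREAM | SOCK_NONBLOCK, 0);
        if (connect(c, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            printf("client %d: %s\n", i, strerror(errno));
        } else {
            printf("client %d: connected\n", i);
        }
    }
    return 0;
}

The first connections succeed and the remainder print "Resource
temporarily unavailable" (EAGAIN); a larger backlog, as introduced
below, lets them queue in the kernel until the server accepts.
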
 blockdev-nbd.c |  7 ++++++-
 qemu-nbd.c     | 10 +++++++++-
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/blockdev-nbd.c b/blockdev-nbd.c
index d8443d235b7338949a4e6e10dec5..b264620b98d8c024b147872ce089 100644
--- a/blockdev-nbd.c
+++ b/blockdev-nbd.c
@@ -134,7 +134,12 @@ void nbd_server_start(SocketAddress *addr, const char *tls_creds,
     qio_net_listener_set_name(nbd_server->listener,
                               "nbd-listener");
 
-    if (qio_net_listener_open_sync(nbd_server->listener, addr, 1, errp) < 0) {
+    /*
+     * Because this server is persistent, a backlog of SOMAXCONN is
+     * better than trying to size it to max_connections.
+     */
+    if (qio_net_listener_open_sync(nbd_server->listener, addr, SOMAXCONN,
+                                   errp) < 0) {
         goto error;
     }
 
diff --git a/qemu-nbd.c b/qemu-nbd.c
index a7075c5419d710d773a5c5ed749f..39b517c948b4c45544e01fc3f070 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -969,8 +969,16 @@ int main(int argc, char **argv)
 
     server = qio_net_listener_new();
     if (socket_activation == 0) {
+        int backlog;
+
+        if (persistent) {
+            backlog = SOMAXCONN;
+        } else {
+            backlog = MIN(shared, SOMAXCONN);
+        }
         saddr = nbd_build_socket_address(sockpath, bindto, port);
-        if (qio_net_listener_open_sync(server, saddr, 1, &local_err) < 0) {
+        if (qio_net_listener_open_sync(server, saddr, backlog,
+                                       &local_err) < 0) {
             object_unref(OBJECT(server));
             error_report_err(local_err);
             exit(EXIT_FAILURE);
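
The values chosen above follow listen(2) semantics: the kernel
silently truncates any backlog larger than net.core.somaxconn, so a
persistent server simply requests the most the system will grant,
while MIN(shared, SOMAXCONN) sizes the queue to the expected number of
-e/--shared clients. A hypothetical invocation that benefits from the
larger backlog (the image and socket paths are placeholders):

    qemu-nbd --persistent --shared=8 --socket=/tmp/export.sock disk.img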