From: Dario Faggioli <dfaggioli@suse.com>
Date: Wed, 28 Sep 2022 13:13:08 +0200
Subject: linux-user: use "max" as default CPU model, to deal with x86_64-v2
 binaries

Git-commit: 0000000000000000000000000000000000000000
References: bsc#1203684

The old "qemu64" model cannot run binaries compiled for, e.g.,
x86_64-v2. This could be a problem because a couple of major
distributions are switching to that as their baseline. In fact, errors
like this one can be observed (if 'ls' is such a binary):

  x86_64-linux-user/qemu-x86_64 /usr/bin/ls
  qemu: uncaught target signal 4 (Illegal instruction) - core dumped

Instead, using "max" as the CPU model, everything (of course) works:

  export QEMU_CPU=max
  x86_64-linux-user/qemu-x86_64 /usr/bin/ls

This has been and is being discussed in several places, e.g.:
https://lore.kernel.org/qemu-devel/20210607135843.196595-1-berrange@redhat.com/
https://bugzilla.redhat.com/show_bug.cgi?id=2079915
https://bugzilla.redhat.com/show_bug.cgi?id=2080133
https://github.com/containers/podman/issues/14314

However, these are all about system emulation/virtualization, which is
indeed quite tricky: what would be a good alternative default CPU
model to pick, in that case? At the same time, it's also less
problematic, as people using QEMU for that purpose are likely in one
of the following two situations already:
1) they're starting QEMU manually, with a long and complex command line,
   for whatever specific reason. In that case, adding '-cpu host' (or
   whatever) to such a long and complex command line isn't a big deal;
2) they're using QEMU via libvirt, which has its own fancy and
   convenient ways of determining the best CPU model, so the default
   "qemu64" one is pretty much never used.

The case of Linux user emulation, however, is a bit trickier, as it's
less convenient to pass any parameter to QEMU at all in this scenario
(e.g., when QEMU is invoked transparently via binfmt_misc), so having
to add one might be complicated. The same goes for having to define
the QEMU_CPU environment variable. When doing Linux userspace
emulation, though, a lot of the downsides of just using '-cpu host'
as the default are non-issues (e.g., we do not need to think about
migration!).

Therefore, while the topic remains complex and unsolved for system
emulation, for Linux user emulation let's just switch and be happy.

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
 linux-user/x86_64/target_elf.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/linux-user/x86_64/target_elf.h b/linux-user/x86_64/target_elf.h
index 7b76a90de8805a84b4983f3b2bb9..3f628f8d66197faae698cbec4e24 100644
--- a/linux-user/x86_64/target_elf.h
+++ b/linux-user/x86_64/target_elf.h
@@ -9,6 +9,6 @@
 #define X86_64_TARGET_ELF_H
 static inline const char *cpu_get_model(uint32_t eflags)
 {
-    return "qemu64";
+    return "max";
 }
 #endif